[Wittrs] Re: Searle's CRA and its Implications

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Tue, 16 Mar 2010 17:14:33 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> I should have elaborated a bit more here to make my meaning clear, Stuart. I 
> wrote:
> > If you believe a million cpu's doing syntactic operations
> > on symbols will generate conscious understanding when one
> > cpu does not then it seems to me that you must believe
> > organic brains actually exist as multi-processor computers.
> I take it you believe intentionality/semantics will arise as an emergent 
> property in massive multi-processor systems. Given the findings of the CR 
> thought experiment, and given that no strong AI systems exist today, it seems 
> to me that either 1) you must believe in magic, or 2) you must believe the 
> brain counts as such a system.

I don't care for "emergent" as it suggests something mysterious. But if one simply means a higher-level feature of a system, because the system of many processes working together is more than any single process or smaller sub-set of processes, then I'd be okay with the term.

Note, however, that you cannot speak of "the findings of the CR thought 
experiment" because a thought experiment has no findings. It is just what 
anyone takes from it and that, of course, will reflect one's various 
pre-existing ideas as well as capabilities, etc. To have "findings" you need to 
be able to offer something that is demonstrable in some public way, data that 
can be collected, examined objectively and worked over and then double checked 
against further testing. Just because the term "experiment" is used doesn't 
mean a "thought experiment" has the status of a scientific experiment. Indeed, 
"experiment" here looks more like a kind of metaphor or, perhaps, a family 
resemblance usage.

Moreover, the fact that no AI system of the type predicted by a theory like Dennett's currently exists is no more proof that one could not exist than the fact that for millions of years mankind (in all his genetic iterations) had no flying machines proved they were impossible.

Supposing that consciousness is a feature of a system rather than of some particular physical process that is constitutive of the system is NOT to believe in magic. It's to have a different understanding of what's meant by "consciousness".

> In the first case your argument amounts to a statement of religious faith.

Can you show how or why that description makes sense?

> In the second case you have what you might consider evidence to support your 
> theory. If you believe the brain really exists as a computer, and if you 
> believe semantics arises as an emergent property of its computations, then it 
> would seem possible to you that strong AI=true.

Well of course it seems possible to me "that strong AI is true". THAT's what I 
have been arguing all this time!

> But can we consider the brain a computer in the first place? I don't think so.

The issue is not whether the brain is a computer, or a particular kind of computer, or what a computer is! The issue is whether what brain processes do can also be accomplished on a computational platform with sufficient capacity to run the right kind of system.

> Is the Brain a Digital Computer?
> http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html
> -gts

Please make your arguments here. I don't have the time to keep jumping to 
off-list references and reading extensive articles. If something someone else 
has said is important to the case you are making, you can summarize or quote 
excerpts from them and provide a link, if it's available on-line (a source if 
it's not), and we can talk about it. But we don't get anywhere by just pointing 
to others as authorities. If what they say is to be a part of the discourse 
here, then please tell us what they say.



Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
