[Wittrs] Re: An Issue Worth Focusing On

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 14 May 2010 14:18:48 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:
<snip>

>
> it is true that not every arrangement of atoms constitutes liquidity;
> but, simulating molecular motion with syntactical operations doesn't
> ever constitute liquidity.
>
> Joe


Of course, but we're not talking about constituting liquidity; we're talking about constituting subjectivity. It stands to reason that not every combination of processes will produce the same results.

The combination of certain kinds of constituents, molecules behaving in a certain way, produces wetness and liquidity, but no one is suggesting they would produce subjectivity. Similarly, the combination of certain other kinds of constituents, in this case certain kinds of information processing, may be all that's needed to produce subjectivity even if they could not produce liquidity. Subjectivity and liquidity, after all, are different kinds of phenomena.

Only the principle is being considered as a way of explaining the occurrence: that certain combinations of constituents can yield novel system-level features. The claim is not that the two cases would produce equivalent results.

But much seems to hinge, for you, on the notion that computational processes running on computers are, finally, "syntax" or "syntactical operations," as Searle would have it. But what does it mean to say, as he does, that computer programs are just "syntax"?

Budd argues it's to say they are abstract, excluded from the world of causes, and so incapable of doing anything. But if we are talking about computational processes running on the physical platform that is a computer, why should we take them to be different from other physical processes, say those found in brains, in terms of their instantiation? And is it legitimate to think that THAT is what AI researchers attempting to replicate consciousness on computational platforms mean? Has any of them ever argued that the abstraction of a given algorithm is enough, absent implementation?

If brain processes (which are physical) can accomplish whatever it is they do to produce features like understanding, why should we think processes accomplishing the same thing on a different physical platform would fail to do the same? Well, maybe brain processes are doing something computer processes can't?

Yes, maybe that is true. But maybe it isn't. We don't know enough at this stage to say. So the question is whether the hypothesis that computers can do it is logically disqualified as a candidate from the start, or whether it should be left to empirical evidence to determine whether they can or not.

Searle's CRA is an argument for logical disqualification. But it doesn't succeed at that, because of the equivocation and because of its dependence on a suppressed premise which assumes its conclusion, a conclusion that is unsupported by any related argument in the CR/CRA, and that is undefended, and even denied, by Searle himself and many of his adherents.

This, finally, is about whether it's enough, to account for consciousness in the world, to describe and understand it as a system-level property. If it is, that undermines the shaky logic of the CRA.

SWM
