[Wittrs] The Core Idea: System vs. Process Properties

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Wed, 24 Mar 2010 13:33:21 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:

> --- On Tue, 3/23/10, SWM <wittrsamr@...> wrote:
>
> > The system in question doesn't understand because it isn't specked to
> > understand, it's specked to do a rote operation.
>
> Looks to me like Searle specked the room to do what computers *actually* do: 
> rote syntactical operation on symbols.
>

But the question is what it takes to achieve understanding. Computers 
certainly operate mechanically, according to the algorithms (instructional 
steps) they are fed via programming. But brains must also operate mechanically, 
at the cellular level, unless there is some as-yet non-physical operation at 
work, in which case we are back to a presumption of dualism. If everything is 
physical at bottom, then brains and computers are both physical. If physical 
brains are enough to engender understanding, why shouldn't physical computers 
be enough?

Well, it looks like they aren't if we look at the basic constituents of their 
operations: just zeros and ones in long combinations, in step-by-step 
sequences, transformed into electrical changes in the circuitry of the 
computers. But no one suggests this is the understanding, any more than anyone 
suggests that the passing of an electrical charge between neurons in certain 
sequences in brains is, itself, the understanding. What's at issue is how 
these basic operations become the features of subjectness we call 
consciousness, including, of course, understanding.

The CR is not specked to do all the things, all the different processes 
performing all the different functions, that we see by introspection in 
ourselves when we have an instance of understanding. All the CR is built to do 
is receive certain kinds of input, match that input against a stored file of 
existing symbols, select the one that matches, and send it out as output. 
Does that look like what a brain does to you? Does it bear any resemblance to 
what happens in your own mind when you realize you have understood something?

If not, why should you expect the CR to have understanding? But the next, and 
really critical, step is to ask: why does the failure of the CR, which is 
specked only to do rote responding, to be conscious imply anything about any 
other configuration of the same constituents found in the CR? Why shouldn't a 
computational system that did the kinds of things that happen in us when we 
understand things (representing, sorting, storing, associating, etc., across a 
broad range of highly complex pictures or models of the world) be able to 
achieve what a rote-responding device like the CR cannot?
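
To make the contrast concrete, here is a minimal sketch of rote responding (in 
Python, with an invented rule table, offered purely as an illustration, not as 
Searle's own setup):

    # A minimal sketch of rote responding: match each input against a
    # stored table of symbol pairings and emit whatever is filed there.
    # The pairings below are invented for illustration; the point is
    # only that no step depends on what any symbol means.

    RULE_BOOK = {
        "squiggle": "squoggle",   # hypothetical symbol pairings
        "ni hao": "ni hao ma",
    }

    def rote_respond(input_symbols: str) -> str:
        """Look up the input form and return the stored output form."""
        return RULE_BOOK.get(input_symbols, "")  # pure form-based matching

    print(rote_respond("ni hao"))  # prints "ni hao ma", no grasp of greeting

Nothing here represents, sorts, stores, or associates across a model of the 
world; that is the gap between a rote responder and the kind of system at 
issue.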



> Your argument about specks seems equivalent to an argument that pigs can't 
> fly because nature underspecked pigs. If only pigs had wings, you might 
> argue, then pigs could fly. And I certainly agree with that. If pigs had 
> wings then they could fly and if computers could be specked to have 
> understanding then they could understand.
>

The reason you think they can't lies in your commitment to the idea that 
whatever understanding is, it must be a property of the particular processes 
that make up computers. But, as I have repeatedly tried to make clear here, 
what if it is a system property, not a process property?


> > 2) The specific system implemented in the CR is inadequate
> > to achieve understanding because it only performs a
> > relatively limited function.
>
> It does what computers do, Stuart. Have you ever written a program? They do 
> form-based syntactic operations on symbols, just as the CR does.
>

And brains do what they do, bearing in mind that there is reason to believe 
that not all brains can do consciousness. This isn't about what the 
constituents of the CR are like; it's about what they can do in the right 
combination.


> > THE POINT STILL REMAINS THAT THE CHINESE SUB-SYSTEM DOESN'T UNDERSTAND
> > BECAUSE IT LACKS THE CONNECTIONS HE TELLS US THE MAN HAS, OR THAT HUMAN
> > CHINESE SPEAKERS HAVE.
>
> It lacks understanding because syntax is neither constitutive of nor 
> sufficient for semantics!
>

That is a circular argument, or else merely an article of faith, because the 
question before us is precisely whether syntax can cause consciousness! You 
can't reach that conclusion simply by affirming it. (I suggest you think about 
my previous point that that particular premise equivocates its terms; 
specifically, "is neither constitutive of nor sufficient for" is used in two 
ways in the CRA.)

> Even if we added more parallel operations to the system -- what you want to 
> call "connections" -- those additional operations would exist also as 
> syntactic operations, and the system still would not get semantics from 
> syntax. A billion x 0 still equals 0.
>

How do you know that? Doesn't this just depend on your commitment to the idea 
that understanding must be a property of the processes rather than of the 
system(s) they constitute? But if that commitment is dispensed with, your 
claim is left without a basis for asserting its truth.

Note that this isn't about the nature of computer processes; it's about the 
nature of the systems they can be combined to constitute.


> > It doesn't connect symbols to mental images and ideas which have further
> > connections to a whole host of others in a layered network of
> > representations of the world.
>
> In a computer, images come in binary form just like any other input. The 
> computer does form-based operations on them just as the man does with symbol 
> inputs.
>
>
> -gts


And how do you think brains produce the images we have? Do they just magically 
produce our mental pictures? If so, why, when we open a brain and look inside, 
don't we see them?

SWM
