[Wittrs] Correction - Re: What would Wittgenstein have said?

  • From: "SWM" <SWMirsky@xxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Fri, 02 Apr 2010 14:48:38 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Gordon Swobe <wittrsamr@...> wrote:
>
> --- On Thu, 4/1/10, SWM <wittrsamr@...> wrote:

>
> > Where I wrote: "though I think Dennett's effort is finally doomed by
> > his attachment to the idea of first-personness" I should, of
> > course, have written "Searle's effort". (I was too lazy to double check
> > my words before hitting "send" so I guess that's what I get!)
>
> Have you ever had a toothache? Did it happen in the first person? Or did it 
> happen in the third person, such that other people could feel it too? :)
>
> -gts


What's your point? Do you somehow think I am denying subjectness, that we have 
first-person states? That we have experience? Or that Dennett denies such 
things? (Actually, you did claim Dennett was doing that at one point, as I 
recall.)

Perhaps this is the problem, then, i.e., that you read a claim that 
consciousness can be explained as so many physical processes doing so many 
different things as a denial of first-personness. But that isn't the case at 
all. There is no such denial in this idea.

Maybe this really does come down to a matter of linguistic confusion, then, 
thereby validating Wittgenstein's point. But, if so, I don't see a way to 
dispel it if the problem persists. A little Wittgensteinian therapy might 
work, but how effective can it be against the spell of Searle's CR? And, 
indeed, the CR is kind of like a spell, for that was my experience of it when I 
first read it. It cast a spell over me in that I found it compellingly 
convincing, at least at first -- until I thought about it more extensively.

My original reaction was to say, my god, Searle has hit that nail on the head. 
He has shown us exactly what is missing in claims of computational 
intelligence. It's not enough, I thought, to have lots of information and to be 
able to make the right transformations with that information. If nothing was 
registering in the entity performing the transformations, making the 
associative connections, then there wasn't any real intelligence because there 
wasn't any genuine understanding present. There was, in short, an absence of 
consciousness.

But then, after a while, I thought further about it and had to ask myself: 
what's really missing? What do I have that the mindless machine lacks? And when 
I went over all the elements it seemed to me that, in fact, every one of them 
COULD be a function of some computational process, too. But, I said, there is 
still something missing, right? There still isn't awareness!

And then I noticed that awareness in myself really consisted of layers of 
connections where one layer built on and intermingled with another. Being aware 
in myself involved many things including having an idea of the world(s) around 
me and of the selves I was (my physical self, my personal self, my historical 
self, my relational self, etc.). And all of these could be constructed, too, 
computationally, via the right kind of layered representational networks.

Then it seemed to me that one could actually "construct" a replicated 
consciousness having all the features I found in myself.

Well, isn't something still missing? I thought at first that there must be. But 
what? In fact, the harder I looked, the more I had to acknowledge that there 
was NOTHING about my subjective experience that could not be built up 
computationally, not even the sense of unity, of continuity of experience, 
which could be obtained by enabling the various representational networks to 
interconnect and build still more pictures, this time of unities.

If that was so, then the computational model, given a sufficiently capacious 
machine platform, could, indeed, fully account for my sense of being conscious 
because the brain must be operating rather like a computer would if given the 
capacity to perform the same functionalities.

Now this doesn't mean we currently have the technology to do this or that we 
will ever have it, for, indeed, the brain is remarkably capacious because of 
its level of complexity (as Gerald Edelman has shown) even if it is slower in 
operation than computers (as Jeff Hawkins notes). But the question before us is 
not what we can do today or what we will definitely achieve in the future but 
what is, at least, theoretically possible, given the technology to build a 
workable machine with sufficient capacity to match what a human brain can do.

In the end, of course, the computational model described by Dennett has to be 
tested because, as Hawkins notes, the slowness of brains has probably led to a 
different underlying mechanism, a simpler and more elegant one, than we find in 
computers. So it is certainly possible that the Dennettian model which relies 
on the computational analogy may not fully succeed in real world testing. But 
if a computer CAN perform all the kinds of functions in real time that a brain 
can, then, even if brains do rely on a different underlying mechanism (see 
Hawkins' proposal for how cortexes do intelligence), there is no reason a 
computationally based machine cannot have understanding as we do -- or that it 
cannot have the full range of features that we call "consciousness".

There is no reason, in fact, that there should be only one way to achieve 
consciousness if consciousness IS just the array of features we get from a 
particular kind of system with sufficient robustness performing a particular 
class of functions that brains perform.

But to get to this point, you have to see that consciousness is explainable as 
just such an array of functions performed by a process-based system, rather 
than as some kind of basic, bottom-line irreducible property belonging to 
certain physical events or processes and not others. (While consciousness COULD 
be that, saying it is involves making an assumption about it, not determining a 
fact, and implies a dualistic model which appears to demand a more complex 
account of reality than Occam's Razor requires.)

In other words, the underlying intuition of what we recognize as consciousness 
has to be altered enough to admit of the possibility that a non-dualistic 
account of mind, which doesn't make a fetish of first personness, is feasible 
and realistic, given the things we know about the world.

Can Wittgenstein help with any of this? I don't think he addressed this sort of 
thing directly but his insights into language, particularly his notion of 
meaning as use, the idea of family resemblances in place of essential core 
meanings, and the impossibility of private language all point, on my view, to 
precisely this kind of reconceptualization of consciousness that makes a 
Dennettian model seem eminently sensible and possible.

And then there goes Searle and his CRA, because his argument finally 
depends on a failure to conceive of consciousness in this system-like way.

SWM

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/
