Re: [Wittrs] Wittgenstein on Machines and Thinking

  • From: "SWM" <swmirsky@xxxxxxxxx>
  • To: Wittgenstein's Aftermath <wittrs@xxxxxxxxxxxxxxxxxxx>
  • Date: Mon, 20 Jun 2011 23:05:03 -0000

--- In Wittrs@xxxxxxxxxxxxxxx, Sean Wilson <whoooo26505@...> wrote:
>
> Well Stuart, I don't know that we are able to communicate on the matter.
> 
> The empirical question is what the thing called "machine" (in the future) is 
> doing when it is said to be "thinking." If everyone understands both of these 
> things -- what the creature is and what it is doing -- then agreement or 
> disagreement with the proposition becomes only a matter of what sense of 
> "think" and "machine" one is 
> using.

That's always the case though, right? If we share the same sense of the 
terms AND we agree as to the application of those terms, then what else is 
there to argue about? 

>  Asking whether a "machine thinks" asks that you FIRST have in mind a sense 
> of "think" and a sense of "machine." 

Also a truism. To use any of our terms properly, we have to have a meaning in 
mind and use the term(s) in a way that's consistent with that meaning, no?

> In ordinary senses of these ideas, the answer seems to be "no" --

No to the question posed or no to whether a machine thinks?

> but ONLY BECAUSE OF THE GRAMMAR OF THE ORDINARY SENSE. That grammar is 
> predicated upon "machines" being things like typewriters and personal 
> computers. 
> 

If by "the grammar" you just mean the rules of usage then that is pretty much 
the same as saying what we mean is the key. And it is, if we aim to use our 
terms right.

> If we say that a dog "thinks" and that a human "thinks," we don't necessarily 
> have the same sense of the idea.

Right. A dog's thinking looks to be quite different from what we do. Still, a 
machine that could demonstrate the intelligence of a dog would be a pretty big 
deal, no? 

> Better to say the dog dog-thinks. Or that humans human-think.

That strikes me as an odd way to phrase this. Why do we need to create 
specialized words for this when a little clarification is more than sufficient? 
We are interested in ordinary language applications here, right? Not an ideal 
language. 

> What you are doing with your futuristic thought experiment is taking a 
> grammar that is in play at a certain time -- human over here, machine over 
> there  -- and pretending that a "machine" is 
> created that human-thinks.

Pretending, yes. But that doesn't mean the question is pointless or that it 
describes an impossibility. Certainly we can conceive of a machine that thinks 
more or less like humans. What more would be needed? The point is the 
thinking, the consciousness, not that it be exactly the same as our thinking 
and consciousness. How close must it be to qualify as thinking and 
consciousness? Since this looks to me like a matter of placement along a 
continuum of features, we don't all have to be in the same place to be 
sufficiently similar to share a designation.  

> The reply here is to say that, once you do this, you've un-machined "machine" 
> or, if not, have violated "human-think," because that idea has a 
> species-specific grammar.  
> 

I think that's wrong, Sean. It looks like a perfectly legitimate position to me 
to claim that a machine can be conscious if consciousness is just so many 
functions performed by certain kinds of processes which a machine can match. 
It wouldn't have to be conscious precisely in the way we are to qualify as 
conscious, either. It would just need enough features in common with us. And 
then this isn't about grammar or meanings but about what a machine actually 
can or cannot do.  


> The reason thinking can have a species-specific grammar is that brains are 
> different across creatures, making their form of life 
> different.


Yes, of course. And yet the question of machine intelligence or machine 
consciousness is not about the exclusivity of forms of life.


> If you create a hypothetical where a creature can actually share our form of 
> life, you've completely changed the conditions for 
> languaging about it.


Well, that is the point, isn't it? The thought experiment of a Commander Data 
does just that (to a certain extent, reflecting the limitations of the Data 
platform). Whether this can be done is an empirical question. Whether, if it is 
done, we would then say some machines can think would be a matter of our 
usages. Perhaps we would want to think of Data as no longer a machine, as too 
different from my toaster, say. But there are certainly many, many types of 
machine and, as Cayuse points out, we speak of organic machines (like 
ourselves) too. 

> There might be a new sense of "think" inaugurated by this -- surely there 
> would be a new sense of "machine." More likely, there would be new terms 
> entirely. Compare these advancements: clone, alien, cyborg, etc.     
>

Yes, I think that's true enough. But I don't think that obviates the 
significance of the question of whether a machine could think.
 
> 
> Now, if you deny that "thinking" has a species-specific grammar when using 
> the term, all you have done is introduce a different SENSE of think.

I would argue it's the standard sense.

> Perhaps it's a child's sense. Or perhaps all it means are the behaviors 
> common to dogs, humans and your machine. 


We'd need to explicate it, for sure. And, since it's likely that minds are best 
explained as bundles of an indefinite number of features occurring on a 
continuum, why would going through that kind of exercise be problematic in any 
effort to determine whether machines could think? Certainly the very 
possibility implies that we must clarify what we mean by "think" in the case 
at hand and also what the range of thinking consists of (when do we no longer 
call something thinking?). 


> Whatever it is, it is LOCAL TO HOW YOU ARE PACKAGING THE INFORMATION. If we 
> all agree on what the information is, what package we use is neither here nor 
> there. This is what causes all the confusion: not what the "machine" is 
> doing. This is what makes debates go on for centuries. This is why they are 
> pointless. Like a dog chasing its tail.
>

No, I think it's mainly personalities that make debates go on for centuries on 
lists like this! Another factor, though, vis-a-vis the real centuries we find 
in the philosophical tradition, is the constant shifting of meanings; people 
can argue about meanings for a very long time indeed!

>     
> 
> <sigh>
> 
> 
> Let's just leave it like this. I don't think it is fruitful to continue the 
> matter. The gulf between our respective frameworks is just too large. I don't 
> want to go round and round in a telephone conversation in here. Let's just 
> leave it be.     
>

Fair enough. I don't see a lot to be gained by trying to rehash our differences 
again here. I would just go back to my original point, which is that I don't 
think we can read Wittgenstein as proposing that it's impossible to suppose a 
machine can think, though the text you cited does seem to suggest that. My view 
is that if he did, indeed, hold that position, it would be a mark against him. 
But I don't think he actually held such a position (that one could never 
suppose that a machine, any machine, could think in any realistic sense of 
"think").

SWM

  
> Regards and thanks.
>
> Dr. Sean Wilson, Esq.
> Assistant Professor
> Wright State University
> Personal Website: http://seanwilson.org
> SSRN papers: http://tinyurl.com/3eatnrx
> Wittgenstein Discussion: http://seanwilson.org/wiki/doku.php?id=wittrs
> _______________________________________________
> Wittrs mailing list
> Wittrs@...
> http://undergroundwiki.org/mailman/listinfo/wittrs_undergroundwiki.org
>



_______________________________________________
Wittrs mailing list
Wittrs@xxxxxxxxxxxxxxxxxxx
http://undergroundwiki.org/mailman/listinfo/wittrs_undergroundwiki.org
