[Wittrs] Searle's CRA and its Implications

  • From: Joseph Polanik <jpolanik@xxxxxxxxx>
  • To: wittrsamr@xxxxxxxxxxxxx
  • Date: Sun, 14 Mar 2010 12:24:11 -0400



Gordon Swobe wrote:
--- On Fri, 3/12/10, SWM <wittrsamr@xxxxxxxxxxxxx> wrote:

The CRA shows that if the mind runs programs the way a
computer does, it cannot get semantics from syntax, even
though it has consciousness. The man in the room has
consciousness, after all.

The man in the room isn't there as a conscious entity but
as an automaton blindly following a list of rules. His
consciousness isn't relevant, since he is acting in lieu of
a CPU.

His consciousness has *enormous* relevance, Stuart!

The thought experiment shows that the human mind attaches meanings to symbols by some means other than running formal programs. It shows us that the computationalist theory of mind fails to explain that fact.

The man in the room is there to say: "I understand English, but I do not understand Chinese, although I can manipulate the syntax of each language".
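A toy sketch of that point (my own illustration, not Searle's; the rulebook entries are invented for the example): a program can keep up its end of a Chinese exchange by matching shapes to shapes, while no part of it ever touches what the symbols mean.

    # A rulebook pairing input symbol strings with output symbol strings.
    # The pairings are purely formal: the program matches shapes, and the
    # English glosses live only in these comments, never in anything the
    # code consults.
    RULEBOOK = {
        "你好吗": "我很好",          # "How are you?" -> "I am fine"
        "你叫什么名字": "我叫约翰",  # "What is your name?" -> "I am called John"
    }

    def chinese_room(input_symbols: str) -> str:
        # Look up the input shape and emit the shape the rules dictate.
        # Nothing here attaches a meaning to any symbol.
        return RULEBOOK.get(input_symbols, "请再说一遍")  # "Please say that again"

    print(chinese_room("你好吗"))  # prints 我很好, with no understanding anywhere

The man in the room is in the position of whoever follows such rules by hand: he manipulates the syntax flawlessly, yet only the English of the comments, not the Chinese of the symbols, is understood by anyone involved.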

Joe



--

Nothing Unreal is Self-Aware

@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@
      http://what-am-i.net
@^@~~~~~~~~~~~~~~~~~~~~~~~~~~@^@

