--- In Wittrs@xxxxxxxxxxxxxxx, Joseph Polanik <jPolanik@...> wrote:

>> Right. But he never does explain what solely by virtue of running
>> a program is supposed to actually mean.

> perhaps; but, if so, that leaves the field open for you to define
> what you mean by 'solely by virtue of running a program'. then
> using your definition (and without focusing on the CPU) ...

There's no need for me to define "solely by virtue of running a program," as that is not an important distinction for me. The focus on the CPU comes from the focus in Searle's argument.

> ..., would you explain how running a program could enable a computer
> to experience an afterimage?

Strictly speaking, it is not up to me to explain that. Additionally, I'm not quite sure what you mean by "afterimage."

My interest has been in understanding cognition, without any particular concern as to whether what is happening can be called computation. Based on what I understand a cognitive system to be doing, I am skeptical as to whether that can be done with computation.

If you could take a snapshot of what the brain is doing, and computationally emulate everything as at that snapshot, you should get a system that behaves similarly for at least a brief period. But I doubt that you would get the learning "right," and I doubt that you would get the appearance of acting with free will.

But now I want to turn the tables on you. Could you get appropriate cognitive behavior solely by virtue of following the principles of epistemology?

If there is a problem with computation, it is that computation is solipsistic. What happens is mostly out of contact with reality. There are occasional contacts, as when inputting data. But, for the most part, computation is a solipsistic enterprise. But so is epistemology. AI/computationalism is mostly just mechanized epistemology. Epistemology is mostly nonsense.

Regards,
Neil

=========================================
Need Something? Check here: http://ludwig.squarespace.com/wittrslinks/