Stuart: We may be at great pains to disagree here. You seem to suggest that there is some legislative or contemplative obligation, on someone's part, to decide whether "ethics" can be applied to something a robot is programmed to do -- or, better yet, whether "robot" can be applied to something ethical. I want to suggest that if there were a legislative decision we needed to make about this -- even one made after studious contemplation -- the matter would still not be philosophic; it would be lexicographic. That is, it would be an assembly of lexicographers who would make this judgment. And they would not make it a priori, of course -- they would, in theory, watch and see what people were doing with the terms "robot" and "ethics" when deploying them. No one ever takes a family resemblance and says, "Do we admit the next cousin?" ... "The ayes have it."

And so, whatever this contraption can do, it has no bearing upon any philosophic issue whatsoever. The only role for a philosopher would be to intervene in a discussion where people thought otherwise, showing them that they have been stung by the language game. For example, if someone said, "The robot can do ethics," and someone else said, "No, it can't" -- and still a third said, "If it can, it isn't a robot" -- there would be nothing REAL to dispute here other than whether each likes the other's language arrangement. The only true task of the philosopher is getting the people to see that no dispute exists. The contraption does what it does. They all see it. And they all see the same thing no matter how they play their words.

The only issue would be whether one's word play is facile when compared to the other ways the lexicon offers to express the matter -- for example, an idealist saying the tree is not real, but then saying that the dream is not real-real (a doubling up). We don't have that issue here.

Regards and thanks.