Thought experiments can be harmful. Thought experiments have been a popular investigative device in artificial intelligence, cognitive science and philosophy of mind, where the theoretical underpinnings are nowhere near as well developed as in physics. In AI, the classic thought experiment is Searle’s (1980) Chinese Room argument (CRA): a computer program intended to ‘understand Chinese’ would not really do so, because Searle himself could manually execute the same algorithmic steps while understanding nothing of Chinese. The argument has, of course, been thoroughly debated (e.g., Harnad 1989; Penrose 1989; Copeland 1993; Boden 1994; Franklin 1995; Preston and Bishop 2002; and the peer commentary appearing with the original article), yet it is surprising how few commentators remark on the practicality of doing what Searle proposes. An exception is Copeland (1993, p. 127), who writes of “the built-in absurdity of Searle’s scenario”. What Searle and others seem blithely ready to assume - the existence of a Chinese ‘understanding’ program able to pass the Turing test (Turing 1950) - is so far beyond the current capabilities of AI and computer technology as to amount to science fiction. What could we possibly learn from such a fanciful conception? There is no realistic way of resolving any paradoxes that arise, save appeals to common sense, and we know from the example of quantum mechanics how fallible common sense can be.
One can conceive of (at least) two possible rejoinders. It could be said that Einstein’s Gedankenexperiments were similarly fanciful: no one could chase after a light beam at the speed of light! Yet experimental tests of Einstein’s predictions were on the verge of being practical - by observing binary stars, eclipses of the sun, and so on. So there seems to be a matter of degree here. Another point of view might be that it is too early to pronounce on the CRA: in time, Searle’s predictions might be proved (more or less) right or wrong by empirical means. My own feeling is that this will not happen: the proposed scenario is simply too far from practical, experimental test. But perhaps some good can come out of the CRA if we substitute a task closer to the capabilities of current computer programs than understanding Chinese. This direction was first explored by Puccetti (1980), who substituted the chess room for the Chinese room, although to my mind he did not press the point home.
Searle’s CRA was chosen here for illustration, but there is no shortage of wildly implausible thought experiments in cognitive science and the philosophy of mind. One might mention the Twin Earth argument of Putnam (1975) - see Lloyd (1989) and Kim (1998) for discussion - which relies on confusing your earthly conception of some object with its apparently identical (but subtly different) counterpart in a twin world. Here, Dennett (1995, pp. 410-411) lays the argument bare by presenting “a more realistic example” which “could be” true, involving cats and Siamese cats. Next on my list is the thought experiment that actually convinced me that a paper such as this one was necessary. Dietrich (1989), in developing his argument that computational states involve content (semantics) as well as merely formal manipulation (syntax), writes: “Imagine that I had an exact duplicate made of me yesterday” (p. 123). Well, yes, imagine.
Finally, to dispel the impression that thought experiments could never be of any great value in this area, I offer Braitenberg (1984) as a clear counterexample. We have built ‘vehicles’ similar to those proposed in Braitenberg’s series of thought experiments in synthetic psychology, with interesting results (Damper, French, and Scutt 2000). Here, of course, the value of Braitenberg’s contribution lies in not departing too far (if at all) from what is practical.
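To make concrete what such ‘vehicles’ involve, here is a minimal, purely illustrative Python sketch of a Braitenberg Vehicle 2-style controller: two light sensors drive two motors directly, and crossed excitatory connections make the vehicle steer towards a light source while uncrossed ones make it steer away. The sensor placement, light model and parameter values are assumptions made for illustration; they are not taken from Damper, French, and Scutt (2000).

```python
# Illustrative sketch of a Braitenberg "Vehicle 2"-style controller.
# All numerical details here are hypothetical, chosen only to show the idea.

import math

def light_intensity(sensor_pos, light_pos):
    """Simple inverse-square falloff from a point light source."""
    dx = light_pos[0] - sensor_pos[0]
    dy = light_pos[1] - sensor_pos[1]
    return 1.0 / (dx * dx + dy * dy + 1e-6)

def vehicle_2_step(x, y, heading, light_pos, crossed=True, dt=0.1):
    """One simulation step: sense light, set wheel speeds, move."""
    # Left and right sensors mounted at the front corners of the vehicle.
    offset = 0.2
    left_sensor = (x + math.cos(heading + 0.5) * offset,
                   y + math.sin(heading + 0.5) * offset)
    right_sensor = (x + math.cos(heading - 0.5) * offset,
                    y + math.sin(heading - 0.5) * offset)
    s_left = light_intensity(left_sensor, light_pos)
    s_right = light_intensity(right_sensor, light_pos)

    # Excitatory connections: crossed (Vehicle 2b, turns towards the light)
    # or uncrossed (Vehicle 2a, turns away from it).
    if crossed:
        v_left, v_right = s_right, s_left
    else:
        v_left, v_right = s_left, s_right

    # Differential-drive kinematics (unit wheel base).
    speed = (v_left + v_right) / 2.0
    turn = v_right - v_left
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: a crossed-connection vehicle gradually orients towards the light.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, heading = vehicle_2_step(x, y, heading, light_pos=(5.0, 5.0))
```

The point, in keeping with the argument above, is that a controller like this is trivially buildable in hardware or simulation: the thought experiment never strays far from practical test.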
Just thought you should know. (And it will make your brain hurt.)