Thursday, September 13, 2012

The Chinese Room - A Response


The Chinese Room is an interesting thought experiment. John Searle presents it in such a way that it is hard to refute him. In summary, the Chinese Room consists of a person in a room who, by following the step-by-step instructions of a computer program, takes Chinese characters as input and produces Chinese characters as output. This system passes the Turing test: a human Chinese speaker cannot tell whether he or she is communicating with another Chinese speaker or with a machine. Searle presents several arguments that others have made against the Chinese Room. First, there is the systems reply, which holds that one should consider the whole system, not just the individual person inside it. Next, there is the robot reply, which supposes that the computer is part of a semi-humanoid robot. Third, there is the brain simulator reply, in which the program simulates the actual neurological processes of a brain. Finally, there is the combination reply, which puts all three of the above counterarguments together.

I agree with Searle up to a point. I do not think a computer in the traditional sense can have “intentionality.” A computer takes some input, processes it through a program’s steps, and then outputs something. Given the same input under the same conditions, the computer will always produce the same output. With human “intentionality” and “understanding,” however, the same input does not necessarily produce the same output. As a directly related example, if two people were to read about the Chinese Room experiment and discuss their reactions to it, they would produce two completely different responses. Both people have brains, which some would equate to the “computer” or “computer program.” Yet their “understanding” produced different outputs, each with its own merits and each valid based on its reasoning.
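The determinism point can be made concrete with a small Python sketch. The function `program` below is a hypothetical stand-in for any classical program, not anything from Searle's paper; the point is only that a fixed, stateless procedure maps each input to exactly one output:

```python
def program(input_symbols: str) -> str:
    # A stand-in for any classical program: a fixed mapping
    # from input to output, with no internal state or memory.
    return "".join(reversed(input_symbols))

# Run it twice on the same input: the outputs are identical.
# A classical computer, given the same input and conditions,
# cannot "choose" to respond differently.
first = program("ABC")
second = program("ABC")
print(first, second, first == second)  # CBA CBA True
```

Two people reading the same text, by contrast, are not guaranteed to produce the same response, which is the asymmetry the paragraph above is pointing at.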

Another good example is the calculator. A calculator is a computer, albeit a simple one, but it does not “understand” that two plus two equals four; it is programmed to output four when presented with that equation. In other words, it cannot prove, in a mathematical sense, why two plus two equals four. Similarly, it does not realize that the square root of two is an irrational number; it just outputs what the number is (to a certain accuracy). In mathematics, however, there is a formal proof that the square root of two is irrational, and that proof rests on accepted assumptions (axioms) that require human “understanding” to fully grasp. It is worth noting that computers have aided in producing mathematical proofs that humans were not able to complete on their own. Even so, these computers do not “understand” what the proofs really mean; they are just taking the information given to them and linking it together in a way that produces the desired output.
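A short Python sketch illustrates the gap between outputting √2 and understanding it. The program below prints digits of the square root of two, yet the value it actually holds is a finite ratio of integers, i.e. a rational approximation, with no trace of the irrationality proof anywhere in the computation:

```python
import math

# A "calculator" simply evaluates and prints a finite approximation.
approx = math.sqrt(2)
print(f"{approx:.10f}")  # 1.4142135624

# Every Python float is exactly a ratio of two integers, so the
# number the program stores is rational, not the irrational sqrt(2).
num, den = approx.as_integer_ratio()
print(approx == num / den)  # True
```

Nothing in this code encodes the classical proof (assume √2 = p/q in lowest terms and derive a contradiction about parity); grasping that argument is exactly the kind of “understanding” the paragraph above distinguishes from mere output.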

Even though I agree with some of the points Searle makes, I do not completely agree with his reasoning. When countering the replies to his thought experiment, he uses the same argument each time: the computer (or system) does not understand Chinese, and therefore it does not have intentionality the way a human who understands Chinese does. This makes his arguments seem circular. Furthermore, in his replies to the counterarguments, Searle begins to allude to whatever it is that makes humans “human”: some x-factor that gives humans “intentionality” and that computers cannot acquire. It is similar to how idioms have special meanings in certain languages, and they only truly make sense if one actually understands the language (as opposed to simply translating the words literally).

The Chinese Room is a controversial subject in both philosophy and artificial intelligence. However, just as the Wikipedia article stated, the debate belongs more to the philosophy of mind than to artificial intelligence. When I took an artificial intelligence class, I was honestly more concerned with writing programs that did what I wanted them to do. If I wanted my robot to get out of a maze without falling into a pit, then that was all I cared about. Of course, the thought experiment has implications for the artificial intelligence field, but it should not define the whole field.

I think Searle’s argument is becoming harder and harder to defend in the modern day. With the development of non-traditional computers, such as quantum computers, the line between “intentionality” and programming is becoming fuzzier, and Searle’s arguments might have to be modified to apply to these new kinds of machines. I do not believe his arguments are complete or entirely convincing. Nonetheless, Searle presented a thought experiment that is genuinely thought-provoking, one that has captivated people for the past thirty years, and I think it will continue to do so for years to come.
