Wednesday, January 19, 2011

Reading #1: Chinese Room


Reference Information
   Title: Minds, Brains, and Programs
   Author: John R. Searle
   Publisher: Behavioral and Brain Sciences (1980)



Summary
"The Chinese Room" is a thought experiment performed by the author, John Searle. The premise is that research has created a computer that behaves as if it understands Chinese. It takes Chinese characters as input and follows a series of steps that produce Chinese characters as output. Searle assumes that it passes the Turing Test; the program can convince a Chinese speaker that he/she is talking to another Chinese speaker.
Searle asks: does the machine really understand Chinese or is it simulating the ability to understand? Searle calls understanding "strong AI" and simulating "weak AI".
He then supposes that he is in a room with the computer. Chinese characters are given to him, he runs the program and produces a Chinese character response. If the computer passed the Turing Test this way, so should he, he assumes. He does "speak a word of Chinese" yet, he can make someone believe he does. Since he himself does not understand Chinese, he assumes the computer does not understand Chinese either.

Discussion
I disagree with Searle. If a computer can convince a Chinese speaker that they are talking to another Chinese speaker, how is it that the computer does not understand Chinese? One can make the argument that someone had to program it all in--but isn't that essentially the process of learning? There may be times when a computer cannot decipher the true intended meaning of a speaker, but one could argue that that sort of miscommunication happens between humans all the time.
Perhaps it is important to define what "understand" means in this context. Does it mean that the computer should be able to interpret words and give a logical response, or that it should be able to interpret words from an emotional standpoint? Humans are incredibly emotional and irrational. How would a computer respond to "all these blogs will be the death of me!"? Would it take that literally or figuratively?


Image: http://anita2506.files.wordpress.com/2008/08/wall-e-eve.jpg

4 comments:

  1. Actually, the reason that the computer isn't like a human mind is because it isn't learning. Even though the system as a whole appears to know Chinese, neither the man nor his implements knows a single character of the language. In fact, no matter how many times the "program" is run, the man will learn nothing about the language, because he is merely matching characters from the input to the output, with no regard to what they mean.

    In its context, I believe the word "understand" is intended as the high-level interpretation of these symbols. Even though the man realizes that one character might differ from another, he has no concept of meaning, and thus really cannot interpret the language or read between the lines as a true Chinese speaker would.

  2. You raise an interesting point in your final remarks. Even though I agree with Searle overall, some clarifications would be welcome on some of the assumptions taken for granted. We may wonder: what is the educational level of the Chinese speaker?

  3. I agree that the humanity associated with intelligence can be a huge factor in the argument of Weak vs Strong AI. Are emotions or common sense really necessary to be intelligent?

  4. To answer the question about a computer understanding Chinese, think about the following:
    The computer and the man in the room can both convince a native Chinese speaker. However, the man doesn't understand a word of Chinese, and neither does the computer, as they both use the same method. This is not to say that computers can't be intelligent.

    Angel Narvaez
