Minds, Brains, and Programs by John R. Searle
Author Bio:
John Searle is a professor of philosophy at the University of California, Berkeley. Born in Denver, Colorado, he was educated at the University of Wisconsin-Madison and at Oxford. He joined the Berkeley faculty in 1959 and later became the first tenured professor to join the Free Speech Movement. His early work, including Speech Acts, studied the rules that govern language. He later moved on to artificial intelligence, analyzing what consciousness would mean for a machine.
Summary:
- Hypothesis: Computers do not have the capability to 'learn' or 'understand' a task or skill; running a formal program, however sophisticated, is not sufficient for genuine understanding.
- Methods: Using his Chinese Room analogy, Dr. Searle creates an environment in which we, the readers, together with stacks of paper rules, play the part of the computer processor. Locked in the room, we receive Chinese symbols and, by following instructions written in English, produce Chinese symbols as output without understanding a word of Chinese. We are then left to decide whether this process is enough to count as 'knowing' Chinese (see the sketch after this list).
- Results: He constructs a solid argument against a computer's understanding of a task.
- Contents: Dr. Searle argues that, although you can construct a system that consistently produces correct output for a given input, that does not necessarily mean the system understands the task.
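To make the analogy concrete, here is a minimal sketch in Python of the room as pure symbol manipulation. The rulebook entries are hypothetical illustrations, not taken from Searle's paper; the point is only that a rote lookup can produce plausible output while nothing in the program grasps what the symbols mean.

```python
# A minimal sketch (hypothetical rulebook, not Searle's actual example) of the
# Chinese Room as pure symbol manipulation: the "room" maps input symbols to
# output symbols by rote lookup, with no grasp of what either side means.

RULEBOOK = {
    "你好吗": "我很好",          # hypothetical entry: "How are you?" -> "I am fine"
    "你叫什么名字": "我叫小明",  # hypothetical entry: "What is your name?" -> "My name is Xiao Ming"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates, understanding nothing."""
    return RULEBOOK.get(symbols, "对不起")  # fallback symbols ("sorry")

print(chinese_room("你好吗"))  # prints 我很好: correct output, zero comprehension
```

However large the rulebook grows, the program only ever matches shapes to shapes; on Searle's view, that syntactic shuffling is all a digital computer ever does, and syntax alone never adds up to semantics.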
Discussion:
This article was a very interesting read. The point was presented through a very solid analogy, and I completely agree with Searle; I really enjoyed the way he laid out his argument. Ultimately, computers are created by humans, and they will always simply be following the instructions we give them. Imparting sentience on another being lies outside the realm of artificial intelligence, I think. It has less to do with receiving input and producing correct output, and more to do with creating a whole environment in which information can be received and interpreted the way the human brain does, which is a very abstract concept. If we were to assemble a human body and give it instructions, would it still be a machine, or would it be a person? Questions like these begin to crop up, and the line between independent thought and instruction sets gets blurred. While we can use our current approach to simulate understanding and learning, it is not truly either of those things.

One of the big weaknesses of this paper, I believe, is that it does not address future advances in technology and where those would leave us with respect to an intelligent machine. Searle only touches on the fact that, with our current technologies, we cannot create a machine that genuinely learns. While that is fine for the purpose of his argument, it would have strengthened the paper to offer some counterexample: a hypothetical machine that really does understand its tasks. Artificial intelligence is one of the biggest emerging fields in computer science, and a great deal can be accomplished by teaching a computer to learn and understand a task. This paper should have a huge impact on how people approach building AI in the future.