Saturday, April 21, 2007
Humanity has long held out hope of achieving the dream of artificial intelligence, or AI, as showcased in films like Star Wars and on television in Star Trek: The Next Generation. Perhaps equally, the nightmare of artificial intelligence gone wrong, seen in movies like Terminator and The Matrix, is a specter of the future all hope to avoid. Yet as humankind moves closer to reaching the goal of AI, popular theories about the test for such a milestone discovery have come under some scrutiny.
One such test, the Turing Test, involves convincing a human being that he or she is speaking with another human being in a text-based chat scenario, when in fact he or she is actually speaking with a machine. Put forward by Alan Turing, a mathematician often credited as the father of modern computer science, the test specifies that a human chat with both another human and a machine and, sight unseen, determine which is which. No machine has, as yet, passed the test.
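The protocol Turing described can be sketched in a few lines of code. The following is a minimal, purely illustrative simulation (the respondent functions and the random interrogator are invented for this sketch, not part of Turing's paper): an interrogator questions two anonymized respondents, one human and one machine, and then names which it believes is the machine. An interrogator who can do no better than chance is exactly what a "passing" machine would produce.

```python
import random

def machine_respond(question):
    """A trivial canned-response 'machine' (purely illustrative)."""
    return "That's an interesting question. What do you think?"

def human_respond(question):
    """Stand-in for a human participant's reply."""
    return "I'd have to think about that one."

def imitation_game(interrogator_guess, rounds=3):
    """Run one session of the imitation game.

    The interrogator sees only labels 'A' and 'B', questions both,
    and returns the label it believes is the machine. Returns True
    if the interrogator identified the machine correctly.
    """
    funcs = [machine_respond, human_respond]
    random.shuffle(funcs)                       # hide which label is which
    assignment = dict(zip(["A", "B"], funcs))
    transcript = []
    for i in range(rounds):
        question = f"Question {i + 1}: how would you describe yourself?"
        for label in sorted(assignment):
            transcript.append((label, question, assignment[label](question)))
    guess = interrogator_guess(transcript)      # e.g. "A" or "B"
    machine_label = next(l for l, f in assignment.items() if f is machine_respond)
    return guess == machine_label

# A naive interrogator that guesses at random succeeds about half the
# time -- the chance baseline a machine must hold the judge to in order
# to be said to pass.
result = imitation_game(lambda transcript: random.choice(["A", "B"]))
```

The design point is that the test is defined entirely by the judge's success rate over the transcript, never by inspecting the machine itself.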
The test has many benefits, including its simplicity and its reliance on a specific behavioral test, rather than on complicated and potentially unanswerable questions about the human mind and soul. On the other hand, the test has serious drawbacks. For one, it fails to address the question of whether there is a substantive difference between intelligence and mimicry. It also relies on the sophistication of the human questioner. An artificial intelligence researcher familiar with how such programs work would have enough knowledge to potentially trip up the machine, whereas someone unfamiliar with the Turing Test might be more easily fooled by a chatbot.
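The mimicry worry is concrete: a program in the style of Weizenbaum's ELIZA can hold up a conversation by reflecting the user's own words back through surface pattern matching, with no understanding at all. The sketch below uses a handful of invented rules (not Weizenbaum's original script) to show how little machinery is needed to fool a credulous questioner.

```python
import re

# A few ELIZA-style rewrite rules: each pairs a surface pattern with a
# reply template that echoes the user's own words. No model of meaning
# is involved. (Rules are illustrative, not from the original program.)
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),    "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]
FALLBACK = "Please, tell me more."

def reply(utterance):
    """Answer by reflecting the user's phrasing back: pure mimicry."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I feel anxious about the test."))
# -> Why do you feel anxious about the test?
print(reply("The weather is nice."))
# -> Please, tell me more.
```

Whatever conversational plausibility this produces reflects the cleverness of the rule-writer, which is precisely the distinction between programming sophistication and intelligence that the test cannot draw.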
The Turing Test, though interesting in that it holds out the potential for an easily verifiable test for artificial intelligence, nonetheless fails to take into consideration a variety of key components of the dream of AI. First, as it is entirely text-based, the test does not incorporate several of the more intriguing and potentially useful features of a fully functional humanoid AI, such as full motor capabilities and the ability to recognize the environment and even individual faces. What is more, the test cannot distinguish intelligence from mimicry: even if a machine were to pass the Turing Test, this might reflect the sophistication of its programming more than any "intelligence."
Truly intelligent machines, if such a thing were possible, should have certain characteristics that set them apart from other machines, and conversational ability is perhaps the least interesting or beneficial of these. Artificial intelligence should be able to learn from its environment and from its own mistakes, and should have the potential for complex reasoning. These two features of human intelligence, if replicated in a machine, would be of infinitely more value than the ability to carry on a conversation, though conversation might remain a worthy goal in its own right.