[ home | newsletter | past | join | listserve | shareware | directory | links | md9 ]
The Least Robotic Human
Every year a handful of artificial intelligence programs compete for the Loebner Prize, a modern version of the Turing Test designed to determine whether an artificial intelligence is advanced enough to persuade humans that it is a human intelligence. The human judges engage in sets of five-minute remote computer chats, one with a human and one with an AI, and then judge which is which. So far, no AI has passed the test by persuading enough judges that it is human, but the AI that earns the highest score wins the annual Most Human Computer prize. The human competitors take an active role in persuading the judges that they are human, and each year the best is awarded the Most Human Human prize.

In 2009 Brian Christian set out to become the Most Human Human. His quest is in the trendy realm of experimental philosophy, which attempts to use real-world data to inform philosophical thinking. Most experimental philosophy relies on data from survey questions presented to humans. By entering the Loebner Contest, Christian finds a unique way to make himself a human guinea pig in experimental philosophy. In his book The Most Human Human he shares with us what he learns along the way to winning the prize.

Unlike the other human confederates, who participate in the Loebner Contest helpfully but without ambition, Christian studies for months: reviewing all of the previous Loebner Contest transcripts, asking experts for their advice, and developing strategies to convince the judges that he is human. He decides early in his campaign that humans have to abandon the theory that there is anything unique about human intelligence.

Many of Christian's strategies are specific to the Loebner Prize version of the Turing Test, but he explores a broad range of artificial intelligence issues, providing some fascinating insights. He notes that therapy chatbots lack a table of contents or other navigational aid; a human could use them for hours and never be able to tap into the part of the program that would be helpful. He cites research showing that "uh" and "um" are English words (with distinct meanings) and that other languages also have two such words. He points out that the jobs AIs are taking from humans, like telephone customer service, had already become inhumanly robotic long before AI stepped into the picture.

Until the artificial intelligence program Deep Blue defeated chess grandmaster Garry Kasparov, chess was held up as the "game of kings," a grand example of human intelligence. Christian likens his role in the Loebner Contest to Kasparov's, feeling like a defender of the entire human species. He devises creative strategies to prove that he is human by exploiting current deficiencies in artificial intelligence. The competing AI chatbots lack the capacity to respond consistently over an extended conversation, so the would-be Most Human Human takes every opportunity to display a core personal identity and a knowledge of the earlier exchanges in the conversation. He focuses on current events, including details of the contest location, mentioning in his first volley that the start of the contest was delayed for fifteen minutes; the competing AI chatbot programs cannot be updated that quickly. He skips small talk and jumps into areas that require deeper, wide-ranging general knowledge. He works hard at interrupting the human judge, talking a lot and at great speed, and including conversational "holds" that invite comment, all common in human conversation. In contrast, AI chatbots take turns in conversation and respond with maddeningly vague comments to anything that's not part of their program.
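The chatbot limitation described above can be sketched in a few lines of Python. This is a hypothetical, minimal ELIZA-style program of my own (not any actual Loebner entrant): a short list of pattern-matched rules plus a canned fallback, which is why anything off-script draws the same vague reply.

```python
# Minimal ELIZA-style chatbot sketch (illustrative only, not an actual
# Loebner Prize entrant): scripted pattern rules plus a vague fallback.
import re

RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
]
FALLBACK = "I see. Please go on."

def reply(utterance: str) -> str:
    """Return the first matching scripted response, else the fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    # Anything outside the script gets the same maddeningly vague reply.
    return FALLBACK

print(reply("I feel anxious"))            # Why do you feel anxious?
print(reply("The contest started late"))  # I see. Please go on.
```

A current-events remark like Christian's note about the fifteen-minute delay falls straight through to the fallback, which is exactly the weakness he exploits.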
Critics of artificial intelligence fall into three main categories: those who object to the Turing Test as inadequate or in need of updating, those who argue that current artificial intelligence is not the way humans really think, and those who argue that human-level artificial intelligence is an impossibility. Short as it is, the history of artificial intelligence criticism is too long to summarize in this column, so here is an example of a recently newsworthy critic in each category.

Hector Levesque, a University of Toronto computer scientist, and his collaborators recently proposed a revised Turing Test built on what they call Winograd schemas. They argue that the Turing Test has become too focused on deception, and that the test needs to be modernized to cover the broadest possible range of human linguistic ability and to delve into human knowledge that is not readily searchable on the Internet.

Douglas R. Hofstadter is a longtime AI critic who contends that current AIs do not think at all like humans. Like other critics in this camp, his primary point is that studying the way humans actually think, which in his approach means studying thinking by analogy, is far more important than pursuing the immediate practical benefits of current artificial intelligence technology, and will have much greater long-term benefits.

In the camp of those who claim human-level AI is impossible is Jaron Lanier, a composer, computer scientist (a virtual reality pioneer), and popular science writer (Who Owns the Future?, You Are Not A Gadget). Lanier often sets up "straw men," weak forms of his opponents' arguments. He does, however, make some telling points, including that AI computer programs can be re-programmed into classical, non-intelligent computer programs that simply compute according to rules provided by humans, a form of philosopher John Searle's classic Chinese Room Argument against artificial intelligence.
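A concrete Winograd schema may help here. The trophy-and-suitcase sentence below is the example commonly used to illustrate Levesque's proposal; the small Python structure around it is my own illustrative framing. Resolving the pronoun requires commonsense knowledge, and swapping a single word flips the correct answer, which defeats simple pattern matching or web search.

```python
# One Winograd schema, expressed as data (illustrative framing only).
# The pronoun "it" must be resolved, and the answer depends on a single
# swappable word that requires commonsense knowledge to interpret.
schema = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too {0}.",
    "question": "What is too {0}?",
    "options": ("the trophy", "the suitcase"),
    # Flipping one word flips the correct answer:
    "answers": {"large": "the trophy", "small": "the suitcase"},
}

for word, answer in schema["answers"].items():
    print(schema["sentence"].format(word), "->", answer)
```

Because no statistics about word co-occurrence distinguish the two variants, a program must actually know that large things do not fit inside small ones.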
Christian refers to AI critics, and consults some of them, but avoids signing on to a critical viewpoint. This is a wise move. AI critics are forced to take a position on the nature of human intelligence, from which they end up having to retreat, because knowledge about neuroscience and human intelligence changes rapidly and advances in artificial intelligence continue to reduce the defensible territory of intelligence that is unique to humans.

The Most Human Human is reluctant to step outside his role as an experimental philosopher, content to engage in observation and empirical research. From this perspective, he debunks the "techno-rapture," the singularity envisioned by AI scientists like Ray Kurzweil, on the ground that even if an artificial intelligence passes the Turing Test (or a revised version of it), humans will keep getting smarter too. He makes this important point so subtly and modestly that it is nearly lost, so I'll rephrase it a bit more boldly. The conclusion of The Most Human Human is that the Turing Test, or any intelligence test, will never be a permanent benchmark for artificial intelligence, because human intelligence will continue to increase. Humans will stage a comeback again and again to win any intelligence test in competition with AIs, from chess to conversation. Christian proves this experimentally, by showing how he was able to do it, giving us a methodology by example. As with any scientific experiment, his conclusion is only as good as his own experimental result and the other examples he is able to cite (such as the dynamic re-design of the game of checkers in response to an AI victory), and it will be tested in time by future experiments. He hints at another conclusion that also deserves bolder statement: that the human pursuit of artificial intelligence is itself an unparalleled form of experimental human philosophy, an ongoing exploration of the nature of human intelligence.
Sources and additional information:

Brian Christian, The Most Human Human, Doubleday, 2011.

The Loebner Prize in Artificial Intelligence, http://tinyurl.com/9men4

Gary Marcus, "Why Can't My Computer Understand Me?", The New Yorker, August 16, 2013, http://tinyurl.com/ld8pdae

James Somers, "The Man Who Would Teach Machines to Think," The Atlantic, October 23, 2013, http://tinyurl.com/pcfejwf. Thanks to MLMUGer Bob Barton for this reference.

Jaron Lanier, "Mindless Thought Experiments (A Critique of Machine Intelligence)," original version apparently first appeared in Stuart R. Hameroff, Alfred W. Kaszniak and Alwyn C. Scott, editors, Toward a Science of Consciousness II: The Second Tucson Discussions and Debates (Complex Adaptive Systems), A Bradford Book, 1998, http://tinyurl.com/jwshjmb

The Chinese Room Argument, Stanford Encyclopedia of Philosophy, substantive revision September 22, 2009, http://tinyurl.com/32s6ks
©2014 by Kathy Garges & MLMUG
Posted 01/08/14