Dave, who knows more than a bit about this sort of complex computer mumbo-jumbo, pointed out that Watson likely isn’t actually processing the questions semantically, but rather basing its answers on statistical relationships. Though this doesn’t significantly change the things I address in my post, it’s an important distinction to make.
This evening was the first airing of the long-awaited Jeopardy! match between two of the game’s most successful players and Watson, a question-parsing supercomputer developed by IBM. It’s obvious that Big Blue wouldn’t have given the go-ahead for an official match until they could be relatively sure of a victory (or at least of holding their own), but that knowledge doesn’t make watching this machine play any less impressive. At least judging from what I’ve seen, Watson seems to be pretty on par with the best human players, if not a little bit better, which got me wondering how much of the computer’s ability was due to its processing, and how much benefit it got from its uniquely non-human characteristics.
Our lab does a lot of work investigating how things like emotion, anxiety, and stress influence learning, memory, and cognition. We often do this by having study participants answer a series of general-knowledge trivia questions under different experimental manipulations. Put simply, our research and that of other labs has shown that the stress of being in a test environment, anxiety about getting the answers right, and focus on negative emotion after getting an answer wrong all contribute to a person doing significantly worse in these kinds of situations than others who aren’t as stressed, worried, or sensitive to negative feedback.
Add to those relatively high-level concerns the fact that humans are easy to distract, have different reading speeds, make inconsistent muscle movements, and are subject to things like being hungry or tired. Compare these facts to a computer that never needs to eat or sleep, has only one possible goal (answering questions), and doesn’t have any feelings to be hurt or things to worry about.
Suddenly, bringing a human to a trivia fight might not be the best idea.
Of course, there are many areas where humans have a leg up on our robotic competition, particularly when it comes to “creative” thinking, language use, and joking. That said, Watson seems to be surprisingly good at the kind of wordplay problems that are common on Jeopardy!, which might be a good indication of how close it is to our level of ability.
Looking at the big picture, competing on a game show is unimportant compared to the possible real-world applications of a technology like Watson. As computers become more and more able to do things historically only done by humans, an obvious question is when that all-important line is crossed and the computers are better at being us than we are.
Though there are many ways to determine where this point is, and just as many arguments against any particular measure, let’s stick with Jeopardy! for a second. The way I see it, to be better than a human at answering trivia questions, Watson only has to be as good as the best human player, at least in terms of “thinking” ability. In a game between Watson and a human champion with exactly equal semantic-processing/question-answering abilities, the computer will always win. Because Watson doesn’t care if it gets a question wrong, but Ken Jennings really doesn’t want to be beaten by a computer. Because Watson will always hit the buzzer at exactly the right time, and Joe Human might slip. For all the reasons I talked about above, Watson doesn’t have to be any smarter than a human to beat us.