A gigantic computer created by IBM specifically to excel at answers-and-questions left two champs of the TV game show “Jeopardy!” in its silicon dust after a three-day tournament, a feat that experts call a technological breakthrough in artificial intelligence.
Now, the machine—called Watson—will be tested in two university medical centers to help doctors sift through large amounts of data quickly.
Watson earned $77,147, versus $24,000 for Ken Jennings and $21,600 for Brad Rutter. Jennings took it in stride, writing “I for one welcome our new computer overlords” alongside his correct Final Jeopardy answer.
The next step for the IBM machine and its programmers: taking its mastery of the arcane and applying it to help doctors plow through blizzards of medical information. Watson also could help make internet searches far more like a conversation than the hit-or-miss things they are now.
Watson’s victory leads to the question: What can we measly humans do that amazing machines cannot do or will never do?
The answer, like all of “Jeopardy!,” comes in the form of a question: Who—not what—dreamed up Watson? While computers can calculate and construct, they cannot decide to create. So far, only humans can.
“The way to think about this is: Can Watson decide to create Watson?” said Pradeep Khosla, dean of engineering at Carnegie Mellon University in Pittsburgh. “We are far from there. Our ability to create is what allows us to discover and create new knowledge and technology.”
Experts in the field say it is more than the spark of creation that separates man from his mechanical spawn. It is the pride creators can take, the empathy we can all have with the winners and losers, and that magical mix of adrenaline, fear, and ability that kicks in when our backs are against the wall and we are in survival mode.
What humans have that Watson, IBM’s earlier chess champion Deep Blue, and all their electronic predecessors and software successors do not have and will not get is the sort of thing that makes song, romance, smiles, sadness, and all that jazz. It’s something the experts in computers, robotics, and artificial intelligence know very well because they can’t figure out how it works in people, much less duplicate it. It’s that indescribable essence of humanity.
Nevertheless, Watson—which took 25 IBM scientists four years to create—is more than just a trivia whiz, some experts say.
Richard Doherty, a computer industry expert and research director at the Envisioneering Group in Seaford, N.Y., said he has been studying artificial intelligence for decades. He thinks IBM’s advances with Watson are changing the way people think about artificial intelligence and how a computer can be programmed to give conversational answers—not merely lists of sometimes not-germane entries.
“This is the most significant breakthrough of this century,” he said. “I know the phones are ringing off the hook with interest in Watson systems. The internet may trump Watson, but for this century, it’s the most significant advance in computing.”
And yet Watson’s creators say this breakthrough gives them an extra appreciation for the magnificent machines we call people.
“I see human intelligence consuming machine intelligence, not the other way around,” said David Ferrucci, IBM’s lead researcher on Watson. “Humans are a different sort of intelligence. Our intelligence is so interconnected. The brain is so incredibly interconnected with itself, so interconnected with all the cells in our body, and has co-evolved with language and society and everything around it.”
He added: “Humans are learning machines that live and experience the world and take in an enormous amount of information—what they see, what they taste, what they feel, and they’re taking that in from the day they’re born until the day they die. And they’re learning from all the input all the time. We’ve never even created something that attempts to do that.”
The ability of a machine to learn is the essence of the field of artificial intelligence. The field has seen great advances, but nothing approaching human thinking.
“I’ve been in this field for 25 years, and no matter what advances we make, it’s not like we feel we’re getting to the finish line,” said Carnegie Mellon University professor Eric Nyberg, who has worked on Watson with its IBM creators since 2007. “There’s always more you can do to bring computers to human intelligence. I’m not sure we’ll ever really get there.”
Bart Massey, a professor of computer science at Portland State University, quipped: “If you want to build something that thinks like a human, we have a great way to do that. It only takes like nine months and it’s really fun.”
Working on computer evolution “really makes you appreciate the fact that humans are such unique things, and they think such unique ways,” Massey said.
Nyberg said it is silly to think that Watson will lead to an end or a lessening of humanity. “Watson does just one task: answer questions,” he said. And it gets things wrong, such as saying grasshoppers eat kosher, which Nyberg said is why humans won’t turn over launch codes to it or its computer cousins.
Take Final Jeopardy on Feb. 15, which Watson flubbed and its human competitors handled with ease. The category was U.S. cities, and the clue was: “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.”
The correct response was Chicago, but Watson weirdly wrote, “What is Toronto?????”
A human would have considered Toronto and discarded it because it is a Canadian city, not a U.S. one, but that’s not the type of comparative knowledge Watson has, Nyberg said.
“A human working with Watson can get a better answer,” said James Hendler, a professor of computer and cognitive science at Rensselaer Polytechnic Institute. “Using what humans are good at and what Watson is good at, together we can build systems that solve problems that neither of us can solve alone.”
That’s why Paul Saffo, a longtime Silicon Valley forecaster, and others see better search engines as the ultimate benefit from the “Jeopardy!”-playing machine.
“We are headed toward a world where you are going to have a conversation with a machine,” Saffo said. “Within five to 10 years, we’ll look back and roll our eyes at the idea that search queries were a string of answers and not conversations.”
The beneficiaries, IBM’s Ferrucci said, could include technical support centers, hospitals, hedge funds, researchers, or others who need to make lots of decisions that rely on lots of data.
For example, a medical center might use the software to better diagnose disease. Because a patient’s symptoms can generate many possibilities, the advantage of a Watson-type program would be its ability to scan the medical literature faster than a human could and suggest the most likely result. A human, of course, would then have to investigate the computer’s finding and make the final diagnosis.
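The idea can be illustrated with a minimal sketch. This is not IBM's actual pipeline; the scoring rule, the toy "literature" corpus, and the function names are all hypothetical, meant only to show how software might rank candidate diagnoses by how strongly each co-occurs with a patient's symptoms in the texts it scans.

```python
# Hypothetical sketch (not IBM's method): rank candidate diagnoses
# by how often they co-occur with the patient's symptoms in a toy
# "medical literature" corpus. All data here is illustrative.

TOY_CORPUS = [
    "fever cough fatigue influenza",
    "fever rash measles",
    "cough fatigue influenza",
    "headache fever influenza",
    "rash itching measles",
]

def rank_diagnoses(symptoms, candidates, corpus=TOY_CORPUS):
    """Return candidates sorted by symptom co-occurrence score."""
    scores = {}
    for dx in candidates:
        score = 0
        for doc in corpus:
            words = set(doc.split())
            # Count symptom overlap only in documents mentioning dx.
            if dx in words:
                score += len(words & set(symptoms))
        scores[dx] = score
    return sorted(candidates, key=lambda dx: scores[dx], reverse=True)

print(rank_diagnoses(["fever", "cough"], ["influenza", "measles"]))
# → ['influenza', 'measles']
```

Even in this toy version, the output is a ranked list rather than a verdict, which matches the article's point: the computer suggests, and the physician investigates and decides.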
That’s how the two university medical centers that have signed up to test the technology plan to use it.
The agreements with the Columbia University Medical Center and the University of Maryland School of Medicine will be the program’s first real-world tests outside of the trivia game show and IBM’s laboratories.
Eliot Siegel, a professor at the Maryland university’s medical school, said other artificial intelligence programs for hospitals have been slower and more limited in their responses than Watson promises to be. They have also been largely limited by a physician’s knowledge of a particular symptom or disease.
“In a busy medical practice, if you want help from the computer, you really don’t have time to manually input all that information,” he said.
Siegel says Watson could prove valuable one day in helping diagnose patients by scouring journals and other medical literature that physicians often don’t have time to keep up with.
Yet the skills Watson showed in easily winning the three-day televised “Jeopardy!” tournament also suggest shortcomings that have long perplexed artificial intelligence researchers and that IBM’s researchers will have to fix before the software can be used on patients.
“What you want is a system that understands you’re not playing a quiz game in medicine and there’s not one answer you’re looking for,” Siegel said.
“In playing ‘Jeopardy!’, there is one correct answer. The challenge we have in medicine is we have multiple diagnoses and the information is sometimes true and sometimes not true and sometimes conflicting. The Watson team is going to need to make the transition to an environment in which it comes up with multiple hypotheses—it will be a really interesting challenge for the team to be able to do that.”
Siegel said it would likely be at least two years before Watson will be used on patients at his hospital. It will take that much time to train the program to understand electronic medical records, feed it information from medical literature, and test whether what it’s learned leads to accurate analyses of patient symptoms.
He said he wasn’t bothered by Watson’s on-screen blunders; even highly trained medical professionals make dumb mistakes.
“I will take an assistant that is that fast and that powerful and that tireless any time,” he said. “This is going to be something that 10 years from now will be a completely accepted way that we wind up practicing.”
Watson could be a boon for IBM, the world’s biggest computer services company, if it works as promised in the real world. IBM makes a mint on “analytics” software that helps companies mine their data and predict future trends, such as shopping patterns at a retailer.
Watson currently runs on 10 racks of IBM servers, but computing power generally doubles every two years, so the amount of hardware needed to run the same program soon will be significantly less. The program can also be tweaked to run slower, or scan less information, making it easier to deploy in a university or business setting.
IBM hasn’t disclosed prices for the commercial sale of Watson, nor details of the financial arrangements with the hospitals.