Researchers are fine-tuning a computer system that is trying to master semantics by learning more like a human, reports the New York Times. Give a computer a task that can be crisply defined—win at chess, predict the weather—and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, or understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans accumulate over years, day after day.

Now, a team of researchers at Carnegie Mellon University—supported by grants from the Defense Advanced Research Projects Agency (DARPA) and Google, and tapping into a supercomputing cluster provided by Yahoo—is trying to change that. The researchers' computer was primed with some basic knowledge in various categories and set loose on the web with a mission to teach itself.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of web pages for text patterns that it uses to learn facts—390,000 to date—with an estimated accuracy of 87 percent. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants, and 274 others. NELL also learns facts that are relations between members of two categories…
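To give a flavor of what "learning facts from text patterns" can mean, here is a minimal, purely illustrative Python sketch using a Hearst-style pattern ("<category> such as <members>") over a few hypothetical sentences. NELL's actual architecture is far more sophisticated—this only shows the basic idea of mining category membership from recurring phrasings; the sample corpus and function names are invented for illustration.

```python
import re

# Hypothetical sample sentences standing in for crawled web text.
corpus = [
    "Companies such as Google and Yahoo fund research.",
    "Cities such as Pittsburgh and Boston host universities.",
    "Sports teams such as the Steelers draw large crowds.",
]

# A Hearst-style pattern: "<category> such as <ProperNoun> and <ProperNoun> ..."
PATTERN = re.compile(
    r"(\w+(?:\s\w+)?) such as ((?:the )?[A-Z]\w+(?: and (?:the )?[A-Z]\w+)*)"
)

def extract_facts(sentences):
    """Map each category name to the proper nouns found after 'such as'."""
    facts = {}
    for sentence in sentences:
        for category, members in PATTERN.findall(sentence):
            names = [m.replace("the ", "") for m in members.split(" and ")]
            facts.setdefault(category.lower(), []).extend(names)
    return facts

print(extract_facts(corpus))
# e.g. maps "companies" -> ["Google", "Yahoo"], "cities" -> ["Pittsburgh", "Boston"]
```

A real never-ending learner would, among many other things, use its extracted facts to discover new patterns, estimate confidence for each candidate fact, and couple evidence across categories and relations rather than matching a single fixed regex.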


About the Author:

Laura Ascione

Laura Ascione is the Managing Editor, Content Services at eSchool Media. She is a graduate of the University of Maryland's prestigious Philip Merrill College of Journalism. When she isn't wrangling her two children, Laura enjoys running, photography, home improvement, and rooting for the Terps. Find Laura on Twitter: @eSN_Laura