Researchers are fine-tuning a computer system that is trying to master semantics by learning more like a human, reports the New York Times.

Give a computer a task that can be crisply defined (win at chess, predict the weather) and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence. Few challenges in computing loom larger than unraveling semantics, or understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Now a team of researchers at Carnegie Mellon University, supported by grants from the Defense Advanced Research Projects Agency (DARPA) and Google and tapping into a supercomputing cluster provided by Yahoo, is trying to change that. The researchers’ computer was primed with some basic knowledge in various categories and set loose on the web with a mission to teach itself.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of web pages for text patterns that it uses to learn facts (390,000 to date) with an estimated accuracy of 87 percent. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants, and 274 others. NELL also learns facts that are relations between members of two categories…
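The article only sketches NELL’s method, so the snippet below is a minimal, hypothetical illustration of the bootstrapping idea it describes: start from a few seed instances of a category, find text patterns that co-occur with those seeds, then reuse the patterns to propose new instances. The corpus, seed set, window size, and support threshold are all invented for this example; NELL’s actual system couples learning across hundreds of categories and relations at once.

```python
import re
from collections import defaultdict

# Toy stand-ins for the web-scale corpus and the hand-primed seed knowledge
# the article describes; every value here is hypothetical.
CORPUS = [
    "Pittsburgh is a city in Pennsylvania",
    "Boston is a city in Massachusetts",
    "the mayor of Pittsburgh announced a plan",
    "the mayor of Boston announced a plan",
    "the mayor of Austin announced a plan",
    "Seattle is a city in Washington",
]
SEED_CITIES = {"Pittsburgh", "Boston"}


def context_pattern(tokens, i, window=3):
    """Form a pattern from up to `window` tokens on each side of position i."""
    left = tokens[max(0, i - window):i]
    right = tokens[i + 1:i + 1 + window]
    return " ".join(left + ["<X>"] + right)


def learn_patterns(corpus, seeds, min_support=2):
    """Keep patterns that co-occur with at least `min_support` distinct seeds."""
    support = defaultdict(set)
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token in seeds:
                support[context_pattern(tokens, i)].add(token)
    return {p for p, seen in support.items() if len(seen) >= min_support}


def extract_candidates(corpus, patterns, seeds):
    """Apply the learned patterns back to the corpus to propose new instances."""
    candidates = set()
    for pattern in patterns:
        # Turn the pattern into a regex whose slot captures a capitalized word.
        before, after = pattern.split("<X>")
        regex = re.compile(re.escape(before) + r"([A-Z]\w+)" + re.escape(after))
        for sentence in corpus:
            for match in regex.finditer(sentence):
                candidates.add(match.group(1))
    return candidates - seeds


patterns = learn_patterns(CORPUS, SEED_CITIES)
print(patterns)  # {'<X> is a city', 'the mayor of <X> announced a plan'}
print(extract_candidates(CORPUS, patterns, SEED_CITIES))  # {'Seattle', 'Austin'}
```

On this toy corpus, the seeds Pittsburgh and Boston yield the patterns “<X> is a city” and “the mayor of <X> announced a plan”, which in turn propose Seattle and Austin as candidate cities. Each round of candidates can seed the next round, which is what lets a system like this keep learning indefinitely, and also why it needs confidence estimates (like NELL’s 87 percent) to keep errors from compounding.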