The researchers worked with a data set from Metametrics of approximately 300 students solving addition and subtraction problems and used those examples to reconstruct what the students may be doing right or wrong.

“We worked to come up with an efficient data structure and algorithm that would help the system sort through an enormous space of possible things students could be thinking,” says Andersen. “We found that 13 percent of these students made clear, systematic procedural mistakes, and our algorithm learned to convincingly replicate 53 percent of those mistakes. The key is that we are not giving the right answer to the computer; we are asking the computer to infer what the student might be doing wrong. This tool can actually show a teacher what the student is misunderstanding, and it can demonstrate procedural misconceptions to an educator as successfully as a human expert.”
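The idea of searching a space of candidate student procedures can be illustrated with a toy sketch. This is not the researchers' actual data structure or algorithm; the buggy-subtraction procedures and function names below are illustrative assumptions. Given a student's worked examples, the sketch picks whichever candidate procedure reproduces the most of their answers:

```python
# Toy sketch (an assumption, not the researchers' system): infer a
# student's systematic bug by enumerating a small hypothesis space of
# procedures and scoring each against the student's answers.

def correct_sub(a, b):
    return a - b

def smaller_from_larger(a, b):
    # Classic bug: in each column, subtract the smaller digit from the
    # larger one, never borrowing.
    result, place = 0, 1
    while a > 0 or b > 0:
        da, db = a % 10, b % 10
        result += abs(da - db) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

def drops_borrow(a, b):
    # Bug: subtract column by column, clamping negative columns to zero.
    result, place = 0, 1
    while a > 0 or b > 0:
        da, db = a % 10, b % 10
        result += max(da - db, 0) * place
        a, b, place = a // 10, b // 10, place * 10
    return result

HYPOTHESES = {
    "correct": correct_sub,
    "smaller-from-larger": smaller_from_larger,
    "drops-borrow": drops_borrow,
}

def infer_procedure(worked_examples):
    """Return the hypothesis name that explains the most (a, b, answer) rows."""
    def score(proc):
        return sum(proc(a, b) == ans for a, b, ans in worked_examples)
    return max(HYPOTHESES, key=lambda name: score(HYPOTHESES[name]))

# A student who consistently subtracts the smaller digit from the larger:
student = [(52, 37, 25), (41, 19, 38), (63, 8, 65)]
print(infer_procedure(student))  # → smaller-from-larger
```

The real system faces a far larger hypothesis space, which is why an efficient search structure matters; the sketch only shows the inference pattern of explaining wrong answers rather than grading them.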

The ultimate goal for the software is to give teachers a grading tool that also generates reports on overall classroom outcomes and highlights areas where they need to focus more energy. The software currently works only with simple math such as addition and subtraction but will advance to algebra and more complex equations in the future.

Also presenting the research will be doctoral student Molly Feldman, visiting intern Ji Yong Cho, Monica Ong ’19, and Zoran Popovic, professor of computer science at the University of Washington.

About the Author:

Leslie Morris is director of communications for Computing and Information Science at Cornell University. This article originally appeared online.