When used effectively, computer drills and tutorials can improve student performance in math and science—but the benefits of computer simulations and electronic sensors are less tangible, at least in studies that have been conducted so far, according to a research review that the National Science Foundation (NSF) plans to release this spring.

“The kind of applications that seemed to be more challenging or oriented to higher-learning objectives didn’t fare as well as simple tutorials,” said James A. Kulik, a research scientist at the University of Michigan who wrote the review, titled “Effects of Using Instructional Technology in Elementary and Secondary Schools: What Controlled Evaluation Studies Say.”

“The modes of computer use that had the best record of helping kids learn had simpler approaches,” Kulik said.

The report is still being reviewed by NSF, but its findings were described in a November briefing paper called “School Mathematics and Science Programs Benefit From Instructional Technology.” It comes as the federal Education Department prepares to launch a $15 million research project of its own to evaluate the use of technology in schools.

For this report, Kulik reviewed 36 controlled evaluations of instructional technology in elementary and secondary schools published since 1990, as well as some earlier reviews and less formal studies. He did not include theoretical works, case studies, or studies that did not measure learning outcomes.

The 36 evaluations focused on four types of computer applications: integrated learning systems (ILS) in mathematics, computer tutorials in science, computer simulations in science, and microcomputer-based laboratories.

ILS in mathematics

ILS combine drill-and-practice questions and tutorial lessons. They require students to respond frequently, provide students with feedback on their answers, and keep detailed records on student performance.

Kulik reviewed 16 studies that evaluated students who used ILS to study mathematics. All 16 found higher test scores among students taught with the help of ILS software. Scores improved by 0.38 standard deviations, the equivalent of moving from the 50th to the 65th percentile.
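For readers unaccustomed to effect sizes, the percentile figures reported here follow from treating test scores as normally distributed: a student moved up by d standard deviations lands at the percentile given by the standard normal cumulative distribution function. Here is a minimal sketch of that conversion, assuming the conventional normal-curve model (the code and function name are illustrative, not from the review):

from math import erf, sqrt

def percentile_from_effect_size(d):
    # Standard normal CDF: Phi(d) = 0.5 * (1 + erf(d / sqrt(2))).
    # Multiply by 100 to express the result as a percentile rank.
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

print(round(percentile_from_effect_size(0.38)))  # ~65: ILS in mathematics
print(round(percentile_from_effect_size(0.32)))  # ~63: science simulations, below

Both results match the percentile gains cited in the review.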

The research suggested that schools used ILS for only 15 to 30 percent of the recommended time, and that the systems were treated as an add-on to the curriculum rather than an integral part of instruction.

Kulik said the results might have been even better had researchers focused on “model implementations” rather than typical ones.

“Most evaluation studies from the 1980s and the 1990s suggest that students benefit from ILS instruction in mathematics,” Kulik wrote. “In the typical evaluation study of the 1980s, ILS instruction raised mathematics test scores by about 0.4 standard deviations. More important, in the typical evaluation study from the 1990s, ILS instruction raised mathematics test scores by about the same amount.”

Computer tutorials in science

Unlike the broad tutorial programs used in ILS instruction, science tutorials usually focus on specific topics. Kulik reviewed six evaluations of computer tutorials in science that were published since 1990.

“Overall, evaluations of computer tutorials in the natural and social sciences have produced very favorable results. Effects on test scores in most studies were large enough to be considered educationally meaningful, and tutoring effects on student attitudes were even more notable,” he wrote.

The review also found that students’ attitudes toward math and science improved with the use of computer tutorials and drills. “In most cases, student attitudes were much better in classes with computer tutorials,” Kulik said.

Computer simulations in science

Computer simulations, such as a virtual frog dissection, present students with theoretical or simplified models of the real thing. Students can manipulate the models and observe the results.

For this review, Kulik examined six controlled evaluations of computer simulations in biology, chemistry, earth science, and physics published since 1990. The studies were fairly short and focused on one simulation each.

Two of the studies found positive effects, while two found negative effects. The median effect raised test scores by 0.32 standard deviations, the equivalent of moving from the 50th to the 63rd percentile.

“The results of these studies suggest that computer simulations can sometimes be used to improve the effectiveness of science teaching, but the success of computer simulations is not guaranteed,” Kulik wrote.

Microcomputer-based laboratories

Microcomputer-based laboratories use electronic sensors to collect data such as temperature, heat, light, or pressure, letting students watch a laboratory experiment and see a graph of the data develop at the same time.

Kulik reviewed eight studies carried out in junior and senior high schools in biology, chemistry, graphing, and the physical sciences. Seven of the eight found only small effects of electronic sensors on student learning, some positive and some negative.

“The remaining study found a very strong effect … but the study had a design flaw that might account for the anomalous result,” Kulik wrote.

Interpreting the findings

In explaining why computer simulations and electronic sensors didn’t fare as well in the research, Kulik acknowledged that they are relatively new technologies compared with computer tutorials and drills and therefore haven’t been studied as thoroughly; the research that does exist is neither very current nor complete.

“We’re dealing with evaluations from 1990 to 2000, so they are definitely lagging behind the cutting edge,” Kulik said.

Improvements and new developments in computer simulations and sensors might strengthen their impact on student performance, he suggested, and future evaluations might bear that out.

“The report needs to be analyzed to see whether the technologies reviewed reflect technologies used today,” said Eric Hamilton, acting director for research, evaluation, and communication in NSF’s Education and Human Resources division. “I think there will be a lot of valuable insight that comes out of this report, but there will also be a lot that won’t be useful because it comes from obsolete technologies.”

Educators who spoke with eSchool News largely agreed.

“It sounds like we need more—and more thorough—research,” said Nancy Messmer, director of library media and technology for the Bellingham, Wash., Public Schools.

“I wouldn’t expect the completion of one computer simulation of a frog dissection or a few electronic labs … to increase student achievement,” Messmer said. “I would hope that these activities are part of a continuous sequence of student inquiries involving questioning, research, analysis, and reporting. If students were actively engaged in such activities, then I would hope to find some impact on student achievement.”

At least one educator said he wasn’t surprised by the study’s findings.

“Few things are better for making a lasting impression on students than reality,” said Raymond Yeagley, superintendent of the Rochester School District in New Hampshire. “I question, however, whether impact on student learning—usually translated as test scores—is the only way to measure the value of technology in schools.”

He added, “Our district’s use of technology has improved our assessment capabilities for accountability and tracking student progress, has improved our teachers’ ability to individualize and provide greater opportunities for our most challenged and most gifted students, and has certainly introduced our students to a much broader spectrum of research materials than we could previously house in the school library. These benefits are, in my opinion, worth the cost of the technology even if the scores don’t increase on tests of frog anatomy.”

Links:

“School Mathematics and Science Programs Benefit From Instructional Technology”
http://www.nsf.gov/pubsys/ods/getpub.cfm?nsf03301