The use of certain educational software programs to help teach reading and math did not lead to higher test scores after a year of implementation, according to a major federal report released April 5.
The $10 million study, issued by the U.S. Department of Education (ED), was distributed to members of Congress, and its findings could affect future funding for school technology. That worries some advocates of educational technology, who question how the study was conducted.
The study set out to examine the effectiveness of 15 classroom software programs in four categories: early reading (first grade); reading comprehension (fourth grade); pre-algebra (sixth grade); and algebra (ninth grade).
Researchers studied the impact of the school software products in question on about 10,000 students in 439 classrooms across 132 schools. They found achievement scores were not statistically higher in classrooms using these reading and math programs than in classrooms without the products.
Ed-tech experts say the results aren’t surprising, given how the software was implemented in the participating schools.
Nearly all the teachers received training on the products during the summer or early fall and believed they were well prepared to use the technology in their classrooms. But their confidence waned as the school year went on, the study indicates: "Generally, teachers reported a lower degree of confidence in what they had learned after they began using products in the classroom."
This suggests participating teachers didn’t get the kind of technology coaching or peer support throughout the school year that other research demonstrates is a key element of success.
"Brief training at the beginning of the year is not sufficient. Ongoing and sustainable professional development that provides support and mentoring or coaching for teachers ensures that technology tools and resources are used in ways that lead to increased student achievement," said Mary Ann Wolf, executive director of the State Educational Technology Directors Association.
What’s more, student use of the software accounted for only about 10 or 11 percent of total instructional time over the school year in each of the four experimental groups, well below the usage levels most of the products were designed for. So it’s no wonder, ed-tech advocates say, that researchers didn’t see any tangible results.
To implement the study, volunteer teachers in each of the participating schools were randomly assigned either to use the products (the "treatment group") or not (the "control group"). While the study worked to ensure that teachers received appropriate training and that technology infrastructures were adequate, "vendors, rather than the study team, were responsible for providing technical assistance and for working with schools and teachers to encourage them to use products more or use them differently," the report said.
"Teachers could decide to stop using products if they believed products were ineffective or difficult to use, or could use products in ways that vendors may not have intended. Because of this feature of the study, the results relate to conditions of use that schools and districts would face if they were purchasing products on their own."
What is absent from this description of the research is any recognition that leadership is also a key component of school technology success. In designing a study that aimed to recreate "conditions of use that schools and districts would face if they were purchasing products on their own," the study merely confirms what ed-tech experts already know: that inserting technology into the classroom without the proper leadership and support won’t do any good.
"It is important to remember that educational software, like textbooks, is only one tool in the learning process. Neither can be a substitute for well-trained teachers, leadership, and parental involvement," said Keith Krueger, chief executive officer of the Consortium for School Networking, in a statement.
"This study failed to address several key pieces that other research and educators strongly agree are critical to the success of any efforts to transform teaching and learning," Wolf added. "Strong leadership is needed to encourage the correct use of technology, provide support throughout, and systemically integrate the use of technology for instruction. Integrating technology is much, much more than putting a piece of software into a classroom … As the study purports, it addressed a very narrow piece of educational technology; but more importantly, the study did not include critical components known to be essential for the successful integration of technology–or any other reform effort in transforming education."
Wolf pointed to North Carolina’s IMPACT program as an example. The NC IMPACT program, which was studied through a federal evaluation grant from ED, provided teachers and students with the hardware, software, connectivity, personnel, and professional development to create a 21st-century teaching and learning environment that ultimately affects student achievement. Students in IMPACT model schools, while originally behind their peers on math and reading end-of-grade test scores, caught up to and surpassed these comparison students during the first year of the grant and maintained that lead at the end of the second year, Wolf said.
And that’s just one example of an educational technology project that has resulted in greater student achievement, other federally funded research suggests.
In an interview with eSchool News, the study’s designers defended their methods.
"This was a very well-done study, there are no flaws in it, it had the full engagement of the software developers, and a great deal of attention was given to training and support of the teachers," said Phoebe Cottingham, commissioner of education evaluation and regional assistance for ED's Institute of Education Sciences.
The report was based on schools and teachers who had not used any of the software products in question before. ED has extended the study for a second year to determine whether the software is more effective when teachers have had more experience using it. The department hopes to release the second year’s results next spring.
"We’ll be very interested in what the analysis produces a year from now," Cottingham said.
The lead researcher for the study was Mark Dynarski of Mathematica Policy Research Inc., an independent, for-profit research organization based in Princeton, N.J.
Cottingham and Dynarski both said it’s difficult to compare this study’s findings with those of other studies that found technology has increased test scores, because those studies involved different control groups, methods, and other factors that might have affected the results.
"I think it’s a signal of how important it is that people really use the most rigorous designs as soon as possible, because they can be fooled into thinking things are happening [that aren’t]," Cottingham said of the research.
Cottingham did say one thing that educational technology advocates would agree with: Observers shouldn’t jump to conclusions based on the results of this study alone, something ed-tech advocates hope members of Congress will consider, too.
"I think it’s premature to draw any kind of conclusion [about technology’s impact on student achievement]," Cottingham said. "This is the biggest, most rigorous study that’s been done–but we don’t feel we’re done yet, and the rest of the world shouldn’t consider that we’re done."