Schools should conduct a careful study of their technology programs to determine what’s working and what isn’t. That was the message of a two-day conference hosted by the U.S. Department of Education (ED) in July, titled “Evaluating the Effectiveness of Educational Technology.”
The conference, held July 12 and 13 in Washington, brought together about 350 federal, state, and local education officials to discuss ways that school districts can measure whether their technology programs are actually improving educational outcomes, thereby protecting their investments.
“We must not assume everything that employs technology is going to be successful,” said Education Secretary Richard Riley in his opening remarks. “That is why evaluation is so important.”
The conference marks a shift in focus for an administration that has led the push for increased technology spending in schools.
Figures from the department’s National Center for Education Statistics show that internet access in public schools has more than doubled in the past five years, from 35 percent of schools in 1994 to 89 percent in 1998. And according to the market research firm Quality Education Data, annual K-12 technology expenditures in public schools have more than tripled this decade, from $2.1 billion in 1991 to $6.9 billion this year.
But the increase in spending has led to calls for more accountability, especially from critics in Congress who want to ensure that the money is being spent wisely. And ED officials have been listening.
In an interview with eSchool News, Linda Roberts, special adviser to President Clinton on educational technology, said the administration’s focus has shifted this year from getting technology into the classroom to helping schools use it effectively to meet educational goals.
“We’re calling for long-term studies to research the impact of technology on today’s learners” to find out what works and what doesn’t, Roberts said.
Part of the problem in addressing critics’ concerns, she said, is that much of the current evidence of technology’s success is anecdotal, not empirical. To help solve the problem, the administration’s plan to reauthorize the Elementary and Secondary Education Act (ESEA) would let the secretary of education set aside up to 4 percent of total Title III technology funding to support future evaluation projects.
Tools for evaluation
At its July conference, ED highlighted two recent studies as examples of the kinds of evaluations that are needed.
In Idaho, a statewide evaluation of technology initiatives, which have pumped about $200 million in public and private foundation grants into technology for the state’s public schools, produced positive results. The study showed improved scores on standardized tests given to Idaho eighth- and 11th-graders who had been introduced to technology in the classroom.
And in West Virginia, an analysis of the state’s Basic Skills/Computer Education Program correlates the use of computers in grades K-6 with higher scores on standardized tests that measure basic reading, math, and language arts skills (See “West Virginia study links technology to student achievement,” May 1999).
State education officials were called upon to conduct similar evaluations of their states’ technology programs. But local studies also are needed to ensure that technology is being used wisely, ED officials said. School leaders were challenged to develop their own evaluation plans, and a host of resources were on hand to help get them started.
Leading researchers and evaluators, including Professors Dale Mann of the Columbia University Teachers College and Charol Shakeshaft of Hofstra University, the researchers who led the West Virginia study, talked about the methods and criteria for such studies in a panel discussion.
One piece of advice that emerged from the panel: School districts should team up with higher education institutions or other research facilities to help them develop measurable goals and sound techniques for their investigations.
Researchers also presented white papers to help schools design evaluations. Walter F. Heinecke, assistant professor at the University of Virginia’s Curry School of Education, presented a paper called “New Directions for Evaluation of Technology and Student Learning,” in which he outlined the questions school leaders should ask themselves in developing an evaluation.
Heinecke’s paper, the two state studies, and other resources are available on the conference’s web site.