As school districts from coast to coast deploy ever more technology to help prepare students for high-stakes exams, a new study by researchers at Arizona State University (ASU)—reportedly the largest such research project ever conducted—has concluded that standardized testing can be counterproductive and might actually lower academic achievement.

Researchers found that students in most states with high-stakes exams—such as Arizona’s Instrument to Measure Standards (AIMS), which tests achievement in various grades and soon will be needed for graduation—scored below the national average on Advanced Placement (AP) tests and college entrance exams.

The high-stakes tests also increase the dropout rate for lower-achieving students and result in less class time spent on art, music, science, social studies, and physical education, according to the study, which was funded in part by affiliates of the National Education Association (NEA). The NEA opposes high-stakes testing, a cornerstone of the Bush administration’s education reforms.

“The relative failure of high-stakes tests to achieve their intended purpose and their numerous negative consequences must be considered as America prepares to launch a massive testing program in the effort to improve our schools,” said study co-author David Berliner, an ASU regents professor.

The study—which came as state education officials prepared to submit their final plans for implementing key parts of the No Child Left Behind Act before the Jan. 31 federal deadline—is sure to raise the stakes in the already contentious debate over high-stakes testing.

To help students pass the exams, school districts nationwide are turning to a variety of technology-based solutions—from software that aligns the district’s curriculum with state and national educational standards, to supplemental instructional programs that drill students on the skills they need to improve.

Although some evidence suggests such measures can help raise student achievement on the state-level exams, the ASU researchers found the opposite often is true of students’ performance on national indicators of ability.

For example, after implementing high-stakes exams, twice as many states saw their students’ scores on the SAT and the ACT fall below the national average as saw their students make gains, the study found. And AP scores were worse than the national average in 57 percent of states with high-stakes exams.

Critics of the study say it’s unfair to use these national tests as indicators of student performance, because not all students take the tests.

In Arizona, incoming Superintendent of Public Instruction Tom Horne dismissed the study, saying the AIMS test should be used as a graduation requirement starting in 2006. He wants the required passing math score to be lowered, but he said the test adequately measures whether students have learned the necessary skills in high school.

“One of the purposes of the tests is to be sure we don’t graduate any people who can’t read their diplomas,” Horne said. “To teach only to the test is an overreaction, and it should be discouraged.”

The proper reaction by teachers, Horne said, is to teach better, not to reduce instruction in other subjects.

But Gabriela Gedlaman, Arizona coordinator for FairTest, a group that opposes high-stakes testing, summed up the study’s findings by arguing that the tests don’t achieve the right goal.

“Kids aren’t taught to think at a high level, to analyze things,” she said. “They’re just taught to do as they’re told, with no critical thinking skills involved.”

The study’s other co-author, Audrey Amrein, says she supported high-stakes testing before completing the research.

“In theory, high-stakes tests should work, because they advance the notions of high standards and accountability,” she told the New York Times. “But students are being trained so narrowly because of it, they are having a hard time branching out and understanding general problem-solving.”

See these related links:

ASU study
http://www.asu.edu/educ/epsl/EPRU/documents/EPSL-0211-126-EPRU.pdf

FairTest
http://www.fairtest.org