As policy makers intensify scrutiny of the standardized testing industry in the wake of revelations about scoring errors on the October SAT, a high school senior whose SAT was incorrectly scored low is suing the board that oversees the exam and the testing company hired to score it.

The lawsuit, filed April 7 in Minnesota, is the first since the March announcement that 4,411 students got incorrectly low scores and that more than 600 had better results than they deserved on the October test.

It names the nonprofit College Board and for-profit Pearson Educational Measurement, which has offices in Minnesota’s Hennepin County.

“Any type of a high-stakes test that impacts a life event like college, scholarships, and financial aid has to be scored with 100-percent accuracy,” St. Paul attorney T. Joseph Snodgrass said April 8. “There is no room for error in this type of a situation.”

Pearson spokesman David Hakensen said the company won’t comment on pending litigation. College Board spokeswoman Chiara Coletti also declined to comment.

The lawsuit, filed by attorneys for an unidentified high school senior in Dix Hills, N.Y., seeks class-action status. Lawyers want to allow anyone who took the test in October, except those whose scores were inflated, to join the lawsuit.

The suit seeks unspecified damages, an order requiring adjustment of the inflated scores, and a refund of the test fee.

Test-takers whose scores were made too low had their results corrected, but the College Board has declined to fix the inflated scores. That has angered some college officials, who say the inflated scores could unfairly influence admissions and scholarship decisions.

The SAT is taken by more than 2 million students and used by many colleges as a factor in admissions. The 2,400-point exam measures reasoning skills in reading, writing, and math.

The October test was taken by nearly a half-million students, so the error affected less than 1 percent of the results. The College Board maintains most were off by 100 points or less, but some students saw much wider swings.

Pearson has said the culprit might have been excessive moisture that caused answer sheets to expand and some marks to be unreadable. The error was discovered when the College Board asked the company to hand-score some tests.

Snodgrass’s firm won a $7 million settlement from Pearson in 2002 for scoring errors in Minnesota that affected more than 8,000 students, some of whom missed graduation ceremonies after being told they failed a state-required exam.

The lawsuit alludes to the Minnesota mistake and others in alleging that Pearson has taken shortcuts.

“The College Board contracted with Pearson despite the fact that Pearson is no stranger to botching test scores,” the lawsuit reads.

Experts say mistakes are inevitable in any operation on the scale of grading millions of tests. Still, the episode has sparked wider discussion about both college entrance exams and the growing number of high-stakes, state-level exams: Just how much risk of error is tolerable when students’ futures are at stake?

Recent years have seen a number of scoring errors on state-level tests, as well as graduate school exams like the Graduate Management Admission Test. Besides Pearson’s earlier Minnesota mistake, 4,100 people were incorrectly told by the Educational Testing Service they failed a teacher licensing exam in 2003 and 2004.

Critics of standardized testing seized on the error as confirmation that the testing industry, dominated by CTB/McGraw-Hill, Harcourt Assessment, and Pearson, is stretched too thin for the public's good.

A recent report by Education Sector, a Washington, D.C.-based think tank, portrayed a highly competitive industry facing huge pressure from its biggest clients, the states, to cut costs and deliver results quickly. That time pressure is sometimes reinforced by contract provisions imposing financial penalties if scores come back late.

Since 2000, Pearson reportedly has increased its number of scanners by 66 percent, added 60 percent more processing space, and increased its report printing capacity by 45 percent.

Scott Marion, vice president of the New Hampshire-based National Center for the Improvement of Educational Assessment, said companies like Pearson are improving their processes, but the increased demand and time pressure might be negating the progress. In any case, he said, perfection is impossible.

“You won’t see this mistake from Pearson again, but you’ll see a different mistake,” Marion said. “As long as you have humans involved, you’re going to have some mistakes.”

Even technology is fallible. Kansas school officials are concerned about the reliability of the scores of some of the 4,900 students who were taking tests online in March when the computer system slowed to a crawl.

Of those students, 2,200 were taking the Kansas State Assessments and the rest were taking practice tests, which help teachers evaluate their students’ strengths and weaknesses before the main test.

John Poggio, director of the Center for Educational Testing and Evaluation in Lawrence, Kan., where the Kansas assessments are developed and graded, said the slowdown lasted less than a half hour on March 3.

An unknown number of students were dropped from the system, and other test-takers were forced to wait an unusually long time for the program to respond after they selected answers.

When the problems occurred, the center stopped additional students from beginning the tests, Poggio said. That frustrated some teachers who had planned their day’s schedule around the tests and spent days prepping their students.

Diane DeBacker, director of school improvement and accreditation for the state, said the state sent an e-mail message to school district testing administrators in late March. She said students who were worried that the technical problems caused them to fare poorly on the test will be allowed to review their tests and make changes.

After the slowdown, four schools that had planned to take the assessments electronically requested pencil-and-paper tests, Poggio said.

Currently, about half of the students in the state take their tests online. The advantage of online testing is that results are available to teachers almost immediately.

As a result, some teachers would refresh their computers repeatedly as students finished testing to check the youngsters’ results, adding more demand on the system, Poggio said.

Links:

The College Board
http://www.collegeboard.com

Pearson Educational Measurement
http://www.pearsonedmeasurement.com

Education Sector
http://www.educationsector.org

National Center for the Improvement of Educational Assessment
http://www.nciea.org

Center for Educational Testing and Evaluation
http://www.cete.ku.edu

CTB/McGraw-Hill
http://www.ctb.com/

Harcourt Assessment
http://www.harcourtassessment.com