A new study conducted by researchers at Boston College suggests that scores on tests taken with paper and pencil might substantially underestimate the achievement of students who are accustomed to working on computers.

The study compared how students with varying levels of computer skill perform on open-ended tests taken on computer and on paper. Language arts, math, and science tests were administered to sample groups of eighth-graders to determine whether students tended to do better on computerized tests or on traditional paper-and-pencil tests.

The study, titled “Testing on Computers: A Follow-up Study Comparing Performance on Computer and on Paper,” was conducted by Mike Russell and Walt Haney of Boston College’s Center for the Study of Testing, Evaluation, and Educational Policy.

Russell and Haney drew their sample of students from two Worcester, Mass., middle schools. Only students who had taken the SAT-9 standardized test in seventh grade were included, giving the researchers an indicator of prior achievement against which to compare their results.

The students were assigned to two groups. One group took the language arts and math sections of the test; the other took the language arts and science sections. Each group was further subdivided: half took the language arts section on computer and the other section on paper, while the remaining half did the reverse.
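This counterbalancing amounts to a simple crossover assignment. The Python sketch below is purely illustrative; the group labels and the randomization procedure are assumptions for the sake of the example, not details published in the study.

```python
import random

def assign(students):
    """Illustrative crossover assignment matching the design described above."""
    students = students[:]          # work on a copy
    random.shuffle(students)        # assumed randomization; not specified in the study
    half = len(students) // 2
    groups = {"language arts+math": students[:half],
              "language arts+science": students[half:]}

    cells = []
    for sections, members in groups.items():
        other = sections.split("+")[1]   # "math" or "science"
        mid = len(members) // 2
        # Half of each group takes language arts on computer and the other
        # section on paper; the remaining half does the reverse.
        for s in members[:mid]:
            cells.append((s, {"language arts": "computer", other: "paper"}))
        for s in members[mid:]:
            cells.append((s, {"language arts": "paper", other: "computer"}))
    return cells

print(assign([f"student{i}" for i in range(8)]))
```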

Students participating in the study filled out a questionnaire on prior computer use and took a keyboarding test. They then took the open-ended test, with a limited amount of time to complete both the computerized section and the paper-and-pencil section.

The tests Russell and Haney used were fairly standard, and the scoring guidelines were developed by the National Assessment of Educational Progress and the Massachusetts Comprehensive Assessment System. All handwritten responses were transcribed verbatim into type, so that handwriting would have no effect on scoring.

The results of the study suggested that students who demonstrate computer competency (defined here as keying roughly 20 words per minute) do much better on open-ended language arts tests when the tests are given on computer than when they are given on paper.

For slower keyboarders, however, taking an open-ended language arts test on a computer hurt performance.
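To make the threshold concrete, here is a minimal sketch of how a roughly-20-words-per-minute cutoff might be computed from a keyboarding test. The five-characters-per-word convention is a common typing-test assumption, not a detail taken from the study.

```python
def words_per_minute(chars_typed: int, minutes: float) -> float:
    """Standard typing-test convention: one 'word' equals five characters."""
    return (chars_typed / 5) / minutes

def computer_competent(chars_typed: int, minutes: float,
                       threshold: float = 20.0) -> bool:
    """Classify a student against the study's approximate 20 wpm threshold."""
    return words_per_minute(chars_typed, minutes) >= threshold

# Example: 230 characters in 2 minutes is 23 wpm, above the threshold.
print(computer_competent(230, 2.0))  # True
```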

Interestingly, math scores were lower when students used computers than when they used paper and pencil. That is not surprising, Russell said: students solving math problems on a computer run into frequent formatting problems when entering their calculations.

“As far as I see it, these results leave us with three options,” Russell said. “First, you can try to administer everything by computer, but that is not practical and not fair to some students. Second, we could use the old method, but we already don’t use computers enough to prepare kids. That’s not a good option. Or third, we can take state test scores less seriously, until we can allow for letting kids choose how they want to be tested.”

In the study, Russell concludes that educators should “exercise caution when drawing inferences about students based on open-ended test scores when the medium of assessment does not match their medium of learning.”

Boston College’s Center for the Study of Testing, Evaluation, and Educational Policy

http://www.csteep.bc.edu

“Testing on Computers” study

http://epaa.asu.edu/epaa/v7n20