Computer-based testing can be an effective way to measure so-called “21st-century skills” such as the ability to solve problems and synthesize information, according to a recent federal report.
Funded by the National Center for Education Statistics, a division of the U.S. Department of Education, the study looks at two different computer-based scenarios for measuring students’ scientific skills on the National Assessment of Educational Progress (NAEP), commonly known as the Nation’s Report Card.
The study concludes that computer-based testing holds promise for measuring higher-order thinking skills that cannot be measured easily via traditional pencil-and-paper exams, a finding that is sure to resonate with advocates of teaching 21st-century skills in classrooms.
However, one of the researchers who wrote the report concedes the United States is probably at least five years away from adopting computer-based testing on a more widespread basis in schools.
The report, called “Problem Solving in Technology-Rich Environments (TREs): A Report from the NAEP Technology-Based Assessment Project,” is based on a study of how more than 2,000 eighth-grade students from U.S. public schools performed in one of two computer-based testing scenarios administered in 2003: a search scenario and a simulation scenario.
Eighth-graders were chosen to participate on the assumption that they would have basic computer skills; basic exposure to scientific inquiry and concepts; and the ability to read scientifically oriented material at a sixth-grade level or higher.
The search scenario required students to locate and synthesize information about scientific helium balloons from a simulated World Wide Web environment, and it was designed to measure students’ scientific inquiry and computer skills. The simulation scenario required students to conduct experiments of increasing complexity about relationships among buoyancy, mass, and volume, and it was designed to assess their scientific exploration, scientific synthesis, and computer skills.
According to Randy Bennett, one of the authors of the report and a Distinguished Scientist in the Research and Development Division of Princeton, N.J.-based Educational Testing Service, these scenarios aimed to measure some of the key skills needed for success in college and the workplace.
“To be successful in a knowledge-based economy, individuals must be able to use computers to perform cognitive tasks: among other things, to search for and synthesize information from the internet, use simulations and modeling tools to answer what-if questions, and craft meaningful communications with text-editing and presentation tools,” Bennett explained.
He added: “The scenarios were engaging, highly interactive, and open-ended so as to capture skills not tapped by multiple-choice tests.”
The exams were delivered on school computers or on laptops taken into the schools, and the results suggest that the computer-based scenarios “functioned well as assessment devices.” Of particular significance, there was an “absence of gender differences” in the results, the report says, which was encouraging in light of the common stereotype that girls are less technologically proficient than boys.
Yet some “substantial differences” did emerge among racial and ethnic groups, socioeconomic groups, and groups with varying levels of parental education. These differences could be “very worrisome if they exist beyond the two problem-solving scenarios used in the study,” Bennett said.
Still, he said, the most important result of the study was that 21st-century, higher-order thinking skills can indeed be tested, and tested well, within a computer-based exam.
Bennett and his fellow researchers believe that tests involving problem-solving tasks such as these should be incorporated into federally mandated state testing under No Child Left Behind.
However, when asked whether educators can integrate this kind of 21st-century, computer-based testing in their schools, Bennett had this to say: “We are probably five to 10 years away from having online assessments substantially comprised of such tasks. We don’t yet have the development tools to make their production efficient; we don’t have all of the psychometric methods required for the analysis of student performance; and the schools don’t have the infrastructure to allow their widespread use.”
Still, Bennett offered hope. The study was conducted in 2003, he noted, and the test was administered successfully to a wide cross-section of eighth-graders, meaning at least some schools had the infrastructure needed to deliver it, infrastructure that has likely improved since then.
“Some states are moving faster than others, of course,” he said. “Oregon and Virginia are states that have large, well-established computer-based testing programs. We are not that far off.”
The TRE study is the last of three federal studies that explore the feasibility of delivering NAEP exams via computer. The previous two studies, Mathematics Online (MOL) and Writing Online (WOL), compared online and paper testing in terms of measurement, equity, efficiency, and operations.