Sean Brophy, a researcher at Vanderbilt University’s Learning Technology Center, has created a computerized test for students in grades K-12 that he claims is more effective than standardized paper-and-pencil tests.
Brophy is working in collaboration with the Center for Innovative Learning Technologies on a grant funded by the National Science Foundation to develop computerized learning assessment programs that will expand the types of assessments that can be done.
Current standardized tests measure memory, but scientists want to be able to measure other cognitive skills they say cannot really be evaluated by these types of tests.
Brophy’s computerized assessments aim to provide a more authentic measure of what kids know, how they determine things, and how they apply information in a dynamic process. These tests also can monitor students’ decision-making process on more complex problems, Brophy said.
Nora Sabelli, senior program director at the National Science Foundation, explained, “The standardized test is a useful evaluation, but one type of assessment doesn’t have to do it all. We’re measuring memory now, but it’s not the only thing we should measure. We need more ways to assess conceptual learning, to monitor development, to know what students think, and to help build on their individual knowledge. We want to see how students can apply knowledge, not just what they remember, and computerized tests can help us.”
Besides creating tests that can assess a greater span of cognitive processes, another important purpose of the grant is to develop tests that help students learn as they take them. “We want to help students become better learners and gain more knowledge of content areas,” Brophy said.
To ensure that learning objectives are met, the computerized assessments are designed to probe more than factual recall. The tests ask open-ended questions about causal relations to measure higher-level cognitive skills than those measured by a standardized test.
Each test provides a simulated tutoring session in which students assist a fictional character (e.g., Billy) in tutoring others. Since Brophy’s current research pertains to fifth graders who are studying river ecosystems, a typical test question might be: If you increase the amount of algae in the water, what would you expect to happen?
At this point, students would advise Billy based on Billy’s response to the question. For example, if Billy says, “Due to less oxygen, the fish will die,” the students have to decide if Billy is correct or whether he should learn more before responding.
If students think Billy needs to know more, they can look for resources on this web-based system by following different internet links that accompany the test. Thus, the students can gain more knowledge in order to advise Billy.
Once students decide they have enough information to assist Billy with his tutoring, they choose from multiple-choice answers. Through this process, students learn more about the content and develop critical thinking skills as they decide which resources to tap for the information they need, Brophy said.
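The assessment flow described above can be sketched in code. This is a minimal illustration, not Brophy’s actual software; every class, field, and resource name here is a hypothetical stand-in for the steps the article describes: present a scenario question, show the fictional tutee’s answer, log which resources the student consults, and record the final multiple-choice advice.

```python
# Hypothetical sketch of a tutoring-style assessment item and session.
# None of these names come from the actual Vanderbilt system.
from dataclasses import dataclass, field

@dataclass
class AssessmentItem:
    question: str              # open-ended scenario question
    tutee_answer: str          # the fictional character's attempted answer
    resources: dict            # link title -> content the student can read
    choices: list              # final multiple-choice options
    correct_choice: int        # index of the correct option

@dataclass
class StudentSession:
    item: AssessmentItem
    resources_viewed: list = field(default_factory=list)

    def consult_resource(self, title):
        """Student follows a link to learn more before advising Billy;
        the system logs the lookup, so decision-making can be monitored."""
        self.resources_viewed.append(title)
        return self.item.resources[title]

    def advise(self, choice):
        """Student commits to a multiple-choice answer for the tutee."""
        return choice == self.item.correct_choice

item = AssessmentItem(
    question=("If you increase the amount of algae in the water, "
              "what would you expect to happen?"),
    tutee_answer="Due to less oxygen, the fish will die.",
    resources={"Algae and oxygen": ("Decaying algal blooms can deplete "
                                    "dissolved oxygen in the water.")},
    choices=["Nothing changes",
             "Oxygen may drop and fish may die",
             "Fish populations grow"],
    correct_choice=1,
)

session = StudentSession(item)
session.consult_resource("Algae and oxygen")  # student researches first
print(session.advise(1))                      # True
print(session.resources_viewed)               # which links were tapped
```

The logged `resources_viewed` list is the point of the design: the test captures not just the final answer but the path the student took to reach it.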
While developing these computerized assessments, researchers have seen positive results in their studies. Brophy’s findings show a correlation between how well students do on these tests and how well they do in interviews and on more qualitative types of tests. The tests also are good predictors of a student’s ability to explain, Brophy said, and they indicate how well students understand the concepts.
Brophy believes that students eventually will take more tests on computers as technology continues to infiltrate the classroom and as more testing software is developed.
There are still a few problems to work out before computerized tests like the ones Brophy is developing become mainstream in K-12 education. For example, the cost of the technology makes it inaccessible to many schools at the moment. Researchers also are still brainstorming test formats and topic areas for open-response questions, and they need to develop all the links for the testing web sites.
In addition, researchers need to improve the ability of computers to score essays. Current technology is only about 80 percent accurate in measuring students’ content knowledge from essays, Brophy said.
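To see why automated essay scoring is hard, consider a deliberately naive approach (an assumption for illustration only, not the technique the researchers use): count how many expected concept terms from a rubric appear in the essay. The rubric and essay below are invented examples.

```python
# Toy content-knowledge scorer: fraction of rubric concepts the essay mentions.
# A hypothetical illustration; real essay-scoring systems are far more complex.
def content_score(essay, rubric_terms):
    """Return the fraction (0.0-1.0) of rubric terms found in the essay."""
    words = {w.strip(".,;!?").lower() for w in essay.split()}
    hits = rubric_terms & words
    return len(hits) / len(rubric_terms)

rubric = {"algae", "oxygen", "fish", "ecosystem", "decay"}
essay = "More algae means less oxygen when it decays, so fish suffer."
print(content_score(essay, rubric))  # 0.6
```

The scorer misses “decays” because it only matches exact words, and it would credit an essay that merely name-drops the terms without understanding them; limitations like these suggest why even much more sophisticated systems fall short of full accuracy.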
Still, researchers believe computerized assessments may ultimately provide a more complete picture of students’ knowledge and ability to learn, and schools can look forward to seeing a lot more of these dynamic assessments in the next few years.