A modern, complete scoring method for multiple-choice tests that prepares students for current standardized tests is now available from Nine-Patch Multiple-Choice, Inc. Students need practice using higher levels of thinking as well as knowing something about a subject. Teachers need meaningful, useful feedback.
Education experts want a test that is "…reliable, accurate, and valid relative to intended uses." It must "reveal students' conceptual understandings and … misconceptions." They complain that "…educators … come to faulty inferences about what students do and do not understand and why," and that "…what to do in response to students' poor performance often is a conundrum." These are the problems with right-mark-only scoring.
The solution is simple. Score what is important. Score both knowledge and judgment rather than count right marks.
Value knowledge and judgment equally. That starts the test score at 50% rather than at zero.
Students receive credit for marking right answers to questions they judge they know and can answer. Students also receive equal credit for not marking wrong answers. Each question requires and rewards the use of higher levels of thinking. A student must first determine whether the question can be used to report something he/she knows or can do. Second, the student must mark the right answer or mark no wrong answer (good judgment).
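The scoring rule above can be sketched in a few lines. This is a minimal illustration, assuming one common point scheme (right mark = 2 points, omitted question = 1 point, wrong mark = 0) that makes knowledge and judgment equal in value, so a fully omitted test scores 50%; the actual point values used by the product are not stated here.

```python
def kj_score(responses):
    """Knowledge and Judgment test score as a percentage.

    `responses` is a list of 'right', 'wrong', or 'omit', one per question.
    Assumed point scheme (illustrative only): right = 2, omit = 1, wrong = 0,
    so a blank test starts at 50% and wrong marks cost the omit credit.
    """
    points = {"right": 2, "omit": 1, "wrong": 0}
    earned = sum(points[r] for r in responses)
    return 100 * earned / (2 * len(responses))

# A student who marks 4 right answers and omits the other 16 questions
print(kj_score(["right"] * 4 + ["omit"] * 16))  # 60.0
print(kj_score(["omit"] * 20))                  # 50.0
```

Under this scheme, marking a wrong answer is worse than omitting, which is what rewards honest self-assessment.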
The test now measures quantity and quality. The test scores are meaningful. There are few, if any, wrong marks. (There is no way to know whether a wrong mark on any multiple-choice test is what a student actually believed was right, a lucky guess, or "the best answer" for unknown reasons.)
A test score above 50% indicates the student knows, and knows that he/she knows. This student can build further learning on this foundation. A test score below 50% indicates the student does not know and is not aware of what he/she does not know. This student needs a teacher's help to start from the beginning.
Right-mark scoring is appropriate for identifying mastery students (scores of 90% or higher). It is seriously flawed at the pass/fail point, where half the right marks can come from chance alone. Right-mark scoring performs worst at the point where accurate data are needed most.
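The chance claim can be checked with simple arithmetic. This sketch assumes four-option questions and forced guessing on every unknown question; the specific numbers are illustrative, not taken from the source.

```python
def expected_right_marks(n_questions, n_known, n_options=4):
    """Expected right marks when a student guesses on every unknown question."""
    guessed_right = (n_questions - n_known) / n_options
    return n_known + guessed_right

# Illustration: knowing only 20 of 100 four-option questions
known, total = 20, 100
expected = expected_right_marks(total, known)   # 20 known + 80/4 guessed = 40
chance_share = (expected - known) / expected    # fraction of right marks from luck
print(expected, chance_share)  # prints 40.0 0.5
```

Here a 40% right-mark score rests on knowing only 20% of the material: half the right marks are pure chance, which is why low right-mark scores are so hard to interpret.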
Knowledge and Judgment Scoring corrects this problem by producing two independent scores: knowledge and judgment or quantity and quality. A knowledge and judgment test score of 60% can also have a quality score of 100% (no wrong answers or poor judgment). Both the student and teacher know what is known and what is to be learned.
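The quality score described above can be sketched separately from the test score. This assumes quality is judged only on the questions a student chose to mark (right marks as a share of all marks), which matches the example of a 60% test score with 100% quality when no wrong answers are marked; the exact formula used by the product is an assumption here.

```python
def quality_score(responses):
    """Judgment (quality) score: right marks as a share of all marked questions.

    `responses` is a list of 'right', 'wrong', or 'omit'. Omits reflect
    good judgment about what is not known and are not counted against quality.
    """
    marked = [r for r in responses if r != "omit"]
    if not marked:
        return 100.0  # no marks made, so no poor judgment shown
    right = sum(1 for r in marked if r == "right")
    return 100 * right / len(marked)

# The example from the text: 4 right marks, 16 omits, no wrong marks
print(quality_score(["right"] * 4 + ["omit"] * 16))  # 100.0
```

The two numbers answer different questions: the test score says how much was reportable, the quality score says how well the student judged what to report.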
The test score makes sense because it is based on what the student knows, values, and finds useful. The same right-mark score makes no sense at the pass/fail point because it is based on some knowledge, forced guessing, and chance.
Because the test is responding to what students judge they know, the questions can be grouped by student performance into four groups rather than just easy and difficult:
1. Easy: They know and know that they know.
2. Misconception: They believe they know, but do not.
3. Discriminating: Those who know mark a right answer; those who do not know mark no answer.
4. Difficult: They know they do not know.
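The four groups above can be sketched as a classification over class-wide response rates per question. The thresholds below are hypothetical, chosen only to illustrate the idea; the actual grouping rules used by the software are not stated in this text.

```python
def classify_question(right, wrong, omit):
    """Group a question by class performance (illustrative thresholds).

    `right`, `wrong`, and `omit` are the fractions of students who marked
    a right answer, marked a wrong answer, or omitted the question.
    """
    if right >= 0.8:
        return "Easy"           # they know and know that they know
    if wrong >= 0.5:
        return "Misconception"  # they believe they know, but do not
    if omit >= 0.8:
        return "Difficult"      # they know they do not know
    return "Discriminating"     # knowers mark right; others omit

print(classify_question(0.9, 0.05, 0.05))  # Easy
print(classify_question(0.2, 0.6, 0.2))    # Misconception
print(classify_question(0.5, 0.1, 0.4))    # Discriminating
print(classify_question(0.1, 0.05, 0.85))  # Difficult
```

Note that the Misconception group is only visible because wrong marks are meaningful under this scoring: students mark wrong answers only when they genuinely believe them to be right.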
This produces a better sense of class performance than an essay test provides in classes of 20 or more students. Student counseling matrices relate student, question, class, and test performance in the Windows software Break Out Plus and Power Up Plus.
You can edit the Break Out source code, using Excel Visual Basic for Applications, to suit your scoring needs. Break Out Plus scores tests and checks for cheating. Both are free at www.nine-patch.com. Power Up Plus includes the Test Performance Profile for advanced educators interested in fine-tuning their instructional system; $29.95, single user. Site licenses are available.
# # #
Evaluation Copy Available on Request
Reference: Herman, Joan L., Baker, Eva L., and Linn, Robert L. (Fall 2006). "Assessment for Accountability and Learning." CRESST LINE, Newsletter of the National Center for Research on Evaluation, Standards, and Student Testing. http://www.cse.ucla.edu/products/newsletters/clfall2006.pdf. The CRESST Conference is Jan 22-23, 2007, at UCLA, www.cresst.org.