The dictionary definition of “assessment” does not include the taking of high-stakes final exams. But that is increasingly how the word is used. In the context of schools’ legal obligation to meet Adequate Yearly Progress goals, assessment means accountability as defined by the reporting requirements of the No Child Left Behind Act.

For some people, this is a strategic opportunity to gain support for technology as an indispensable tool for test data collection, analysis, and reporting. And those people are correct. Leaving aside the question of whether these tests actually measure what we need to know about creating future generations of well-informed, caring, and productive human beings, the nationwide pressure for accountability seems to be one of the few drivers of technology investment that has survived the transfer of government money from education to tax cuts and war.

However, for those of us primarily motivated by technology’s ability to enrich teaching and learning, this is a much more ambiguous opportunity. Not only do we want assessment to mean something other than a way of revealing, and punishing, failure; we’re also not sure that assessment is the most important contribution technology can make in the first place.

I recently attended a forum on technology-based assessment at a national conference. The podium speakers correctly noted that technology makes testing more efficient. It cuts turnaround time so teachers can get results more quickly. It facilitates the disaggregation and submission of data. They talked about the coming use of technology for test distribution and test taking. These are exciting and powerful uses of technology that will stabilize its place in the education system.

In my own work as executive director of Mass Networks Education Partnership, I’ve learned that using technology to examine high-stakes test data can be a powerful stimulus for change. Every district we work with has a set of local “myths” that everyone believes but that are not all actually true. Often there is a widespread belief that a particular group of students is dragging down the overall test scores: the special-education kids, the low-income kids, the kids who transfer in from neighboring towns, the non-white kids. It’s always someone. Once the data are examined, however, the negative impact of the “problem group” often turns out to be much less significant than generally believed, or simply nonexistent. By breaking aggregate numbers into subgroup trends, technology-based data analysis can undermine myths and dispel stereotypes.
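To make the mechanics concrete, here is a minimal Python sketch of that kind of disaggregation. It is illustrative only, not any district’s actual data or tools: it assumes a hypothetical file, scores.csv, with one row per student, a numeric “score” column, and a categorical “subgroup” column, and it reports how far the district mean would move if each subgroup were removed.

    # Minimal illustration of subgroup disaggregation. The file name and
    # column names ("score", "subgroup") are hypothetical; no real
    # district data is represented here.
    import pandas as pd

    scores = pd.read_csv("scores.csv")

    overall = scores["score"].mean()
    print(f"District mean: {overall:.1f}")

    # For each subgroup, compare the district mean with and without that
    # group, i.e., how much the "problem group" actually moves the aggregate.
    for group, members in scores.groupby("subgroup"):
        rest = scores.loc[scores["subgroup"] != group, "score"]
        print(f"{group}: n={len(members)}, "
              f"group mean={members['score'].mean():.1f}, "
              f"mean without group={rest.mean():.1f} "
              f"(shift of {rest.mean() - overall:+.1f})")

Often, a shift of a fraction of a point is all that remains of a myth everyone believed.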

We’ve also learned that data from high-stakes tests are best used to analyze group trends over time. As most teachers know, this type of data is much less useful for revealing individual students’ needs. These summative tests must cover too much ground to delve into any one area in the depth required to expose an individual student’s learning style, strengths, and weaknesses. In addition, any one student’s score will fluctuate from day to day based on numerous factors that have little to do with what the student actually knows or is able to do. It is only by aggregating large numbers of separate scores that the distortions produced by these arbitrary fluctuations are reduced enough for valid insights to emerge.
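Why does aggregation help? A toy simulation makes the statistical point; the numbers below are invented, not drawn from any real test. Treat each observed score as a student’s underlying ability plus random test-day noise: the noise in an individual score never goes away, but the noise in a group mean shrinks roughly as one over the square root of the group’s size.

    # Toy simulation (invented numbers, not real test data): each observed
    # score = true ability + test-day noise. Averaging over larger groups
    # shrinks the run-to-run wobble of the mean by roughly 1/sqrt(n).
    import random
    import statistics

    random.seed(42)
    TRUE_MEAN, ABILITY_SD, NOISE_SD = 240.0, 15.0, 10.0

    def observed_scores(n):
        """Simulate n students' scores on one test administration."""
        return [random.gauss(TRUE_MEAN, ABILITY_SD) + random.gauss(0.0, NOISE_SD)
                for _ in range(n)]

    for n in (1, 25, 400):
        # Re-run the "test n students" experiment 1,000 times and measure
        # how much the group mean varies from run to run.
        means = [statistics.mean(observed_scores(n)) for _ in range(1000)]
        print(f"n={n:>3}: run-to-run SD of the mean ≈ {statistics.stdev(means):.2f}")

A single student’s score wobbles by many points from one sitting to the next; a 400-student mean barely moves. That is why group trends are trustworthy where individual diagnoses are not.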

At the same time, it is our experience at Mass Networks that teachers fundamentally think in terms of the needs of individual students, not demographic groups. Our society, most parents, and much of the education-school curriculum demand that teachers treat each child as a distinct person. Thinking in terms of groups, trends, and systemic patterns does not come easily to most educators.

Perhaps this is part of the reason that, when the audience at the forum got a chance to speak, they wanted to take the concept of assessment back from the push for summative evaluation. Instead of making high-stakes testing more efficient, they wanted technology to help them with up-front diagnosis of students’ unique learning styles and needs. The forum participants didn’t want more big, costly, time-consuming events. They wanted technology to help with the small, day-to-day, incremental steps that are the reality of classroom life.

The official speakers at the forum said they were beginning to work backward from the existing high-stakes tests to create technology-facilitated, fully aligned learning tools that would provide the kinds of information teachers need and the kind of support students require to do well on the tests. But many forum participants were not satisfied with the “pre-test” and “post-test” strategies that some of the podium speakers described as their first step in that direction. More fundamentally, some participants felt that using a high-stakes test, or even the process of preparing for one, as the starting point will never lead to the kind of diagnostic learning tool they want. Tests and diagnostics overlap conceptually but are very different things in practice. It’s like using a hammer to drive in a screw: it may go in, but the wood is not in good shape when you’re done.

At a deeper level, based on our work with schools, I suspect the teachers’ deepest frustration wasn’t ultimately about the lack of diagnostic tools. Most experienced teachers quickly gain a solid sense of each student’s strengths and weaknesses. In this time of raised expectations, what teachers desperately need is help figuring out what to do for each child, and then help doing it: how to build on a student’s strengths to address her weaknesses in a way appropriate to her learning style; how to help students master not only the basic academic skills but also the personal attributes that lead to success; how to concretize the abstract concepts, problem-solving strategies, and analytic approaches that are the real foundations of academic achievement. Teachers know where we are all starting from and, on the whole, accept that we have to move to another place. What they desperately want from educational technology is a bridge to help them and their students cross the chasm that separates the two.

Providing prescriptive guidance in a way that honors teachers’ wisdom and experience is a tall order. It will be difficult to develop software that supports embedded, ongoing, formative evaluation, gives meaningful information about each student, guides day-to-day instructional decision-making, and also serves as a learning process for the students. But that is what educational technology must aim for. Moving in that direction will also help reclaim assessment as part of a process of evidence-based instructional decision-making and student learning, what some researchers call “assessment-centered instruction.” And helping to enrich teaching and learning is why most of us got involved in this field in the first place.

Steven E. Miller is executive director of Mass Networks Education Partnership, a nonprofit consulting group that works with education leaders on curriculum, strategic planning, and technology integration. Mass Networks is partnering with the Consortium for School Networking on its new national initiative, Cyber Security for the Digital District. Miller can be reached via www.massnetworks.org.