As every farmer knows, carrots don’t grow too well if you yank them up every couple of weeks to check on how they’re doing.

Much the same might be said of student assessment, a topic all over this issue. Testing kids too often and placing too much emphasis on standardized results can cause stunted growth, as most educators already know and as state officials, parents, and even—someday—TV reporters and real estate agents might eventually discover.

For my money, the genuinely important things in education happen in places such as classrooms and in the minds of students. Effective educators know what works for them (see page 16, for example), but that’s not good enough for most of us. Nah, we want a more generic recipe.

Fold in the following ingredients, shake vigorously, simmer for 12 years, and—voila!—out pops a high school graduate prepared to perfection, able to read and write passably well, not completely ignorant of our history and government, physically fit, well adjusted, and fully competent in rudimentary science and mathematics.

Unfortunately, the formula for successful education is like the recipe for bear stew, which begins . . . “First, catch a bear.” It’s a recipe easier to say than savor.

Nonetheless, the hunt proceeds, growing only more furious as time goes by. To discover what it takes to teach a child, we turn to research. Unfortunately, in education, what some call research often is anecdotal evidence tricked out as scholarship.

Especially when it comes to research on technology, the validity of findings can be flawed, sometimes fatally, by the lag between when the studies were conducted and when the results are vetted, sanctioned, and finally published. For so-called “meta-analysis,” this problem is practically insuperable—at least it is, it seems to me, when the work involves technology.

Meta-analysis, as you probably know, is a research method reportedly proposed first by Gene V Glass, professor of Education Policy Studies and Psychology in Education at the Arizona State University College of Education. “Meta-analysis refers to the analysis of analyses,” Glass wrote in 1976. “I use it to refer to the statistical analysis of a large collection of results from individual studies for the purpose of integrating the findings.”
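For readers who like to see the arithmetic, Glass’s “analysis of analyses” boils down, in its simplest fixed-effect form, to averaging the effect sizes of individual studies, weighting each by the inverse of its variance so that more precise studies count for more. The sketch below uses invented effect sizes purely for illustration; it is not drawn from the NSF study or from Glass’s own data:

```python
# Toy fixed-effect meta-analysis: pool effect sizes from several
# (hypothetical) studies, weighting each by the inverse of its variance.

def pooled_effect(studies):
    """studies: list of (effect_size, variance) pairs for each study."""
    weights = [1.0 / var for _, var in studies]       # precision weights
    total = sum(weights)
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / total
    pooled_var = 1.0 / total                          # variance of the pooled estimate
    return pooled, pooled_var

# Invented standardized mean differences and their variances
studies = [(0.30, 0.04), (0.10, 0.02), (0.45, 0.09)]
d, v = pooled_effect(studies)
print(round(d, 3), round(v, 3))  # pooled effect ~0.203, variance ~0.012
```

The pooled estimate leans toward the middle study here because its small variance gives it the largest weight, which is exactly the point: a meta-analysis inherits the character of its inputs. If the inputs are a decade old, so is the answer.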

That’s the method apparently underlying the forthcoming National Science Foundation (NSF) study of technology’s impact on math and science instruction (page 14). According to a trailer of coming attractions (also known as an InfoBrief) put out by NSF, the study will find that drill-and-kill software produces better student outcomes than more sophisticated technological approaches, such as computer simulations.

Baloney, I say.

Oh, I have no quarrel with the researcher. I don’t doubt for a moment that his findings are the proper conclusions to draw from an “analysis of the analyses.” It’s just that the analyses being analyzed here are years out of date. To be fair, the researcher himself points out this problem.

The study talks about student outcomes following exposure to Integrated Learning Systems (ILS). Right there, that should shoot the warning ensign straight up the old flagpole. Informed educators stopped using the term “ILS” in the mid-1990s. According to the footnotes accompanying the InfoBrief, the most recent study analyzed was conducted in 1996, and most of the studies were done before George Bush, the elder, lost his bid for reelection.

The antiquated nature of the underlying work should ensure the conclusions of this new study are nothing more than a historical curiosity. What was true at the beginning of the last decade has about as much relevance to technology-enhanced instruction today as monastic illumination has to the Saturday morning cartoon shows—except in this case the progression is from bad to better rather than vice versa.

Consider the state of technology in 1990: Tandy computers were an economical alternative to pricey IBM models, and a common debate was whether to buy an economical 286 model or one of the expensive 386s. Apple was selling a top-of-the-line Macintosh with two megabytes of RAM and a 40-megabyte hard drive for just under $4,000. Windows 3.0 was just out and was threatening to drive DOS off the PC screen.

Once upon a time (1981, to be exact), Bill Gates of Microsoft reportedly uttered his now-infamous pronouncement that nobody should ever need more than 640K of memory. That seemed to be a sensible thing to say at the time. Drill-and-kill tutorials once were the best a fledgling instructional software field could conjure up.

Nobody takes Gates’ prediction seriously anymore, including Bill. I just hope nobody—not even in some misguided, back-to-basics frenzy—will be tempted to take the outdated conclusions of this forthcoming NSF study seriously either.