The scientifically based research (SBR) requirement of the federal No Child Left Behind Act (NCLB) is slowly changing the way schools approach new learning solutions. It’s also changing the way companies market their products to educators. But while everyone agrees the provision’s intentions are good, the law has created a host of new problems its authors never anticipated.
Educators, policy experts, and industry heads who spoke with eSchool News all agreed that a cloud of confusion still exists around the SBR portion of the law. And while parties on both sides have struggled to make the necessary adjustments, some school leaders contend that finding proven solutions often amounts to a blind leap of faith in favor of strategies that simply sound good.
At its best, the provision’s staunchest supporters contend, SBR will lead to a paradigm shift in education, where a research-based approach to learning eventually will elicit a level of accountability equaled only in the medical field. At its worst, critics say, the law offers too little guidance and asks students to play the role of guinea pigs in a disruptive chain of control-based research experiments that would serve only to reinforce the line between the haves and the have-nots in the nation’s schools.
In short, the law specifies that all federally funded education initiatives deployed in grades three to eight must be proven effective by way of “scientifically based research.” So if a school district uses federal grant money to purchase reading software, for example, the software in question must be proven to work through rigorous analysis. The same holds true for math and science software, and so on.
But what, exactly, constitutes rigorous analysis? Unfortunately for educators, opinions vary. Mark Dynarski of Mathematica Policy Research Inc., an independent research group tapped by the U.S. Department of Education (ED) in October to begin evaluating the effectiveness of educational technology initiatives in the nation’s schools, says the most effective form of research is what’s known as random assignment.
Dynarski says service providers can test the effectiveness of their products by conducting control-based experiments on two comparable groups of students to demonstrate, for example, that the products had a direct impact on student test scores.
But for many school leaders, the control approach–while effective–raises serious political and ethical concerns. To conduct such research, educators would have to give the technology or learning solution in question to a single group of students within the school, while denying other students access to the same potential benefits.
Many people who spoke with eSchool News agreed the dilemma has resulted in a standoff between companies looking to put the evidence-based seal of approval on their products and educators who ask: What’s in it for us? While some schools have requested that the companies leave their products behind when they finish, others have demanded their corporate partners pay for staff development or, in some cases, reportedly buy their way into the schools the old-fashioned way: with cold, hard cash.
Though he has yet to hear of any schools asking for money in exchange for serving as a test bed for corporate research, Russ Whitehurst, director of the Institute of Education Sciences at ED, said it’s not uncommon for schools to ask for something in return.
“I have heard about this dance that goes on as [companies and schools] try to develop these partnerships,” he said. “I think there needs to be a quid pro quo.”
“It’s definitely a problem,” agreed Mark Schneiderman, director of education policy at the Software and Information Industry Association, which has been working to help software providers understand the law. For the research-based approach to be effective, Schneiderman believes school leaders need to adjust their way of thinking when it comes to issues of equity and begin looking at control-based experiments not as impediments to learning, but as pilot projects used to explore new possibilities and potential best practices for the classroom.
“The greatest issue of concern is the amount of time and resources that would be needed here,” he said. “This is, in many ways, a new thing for educational technology. You’ve got what amounts to a huge learning curve for everybody.”
To help schools better understand the value of controlled research, ED has tapped Mathematica to conduct a national study of 16 computer-based reading and math products from 12 different companies, developed to enhance the learning of reading in grade one, reading comprehension in grade four, pre-algebra in grade six, and algebra in grade nine. The study is part of a three-year, $10 million contract the department inked with Mathematica using money provided through NCLB.
The study will provide information for policy makers and educators about how educational technology can improve student achievement in reading and math, as well as the conditions and practices under which the technologies are most effective.
Teachers will be trained to use the products, which will be demonstrated in schools during the 2004-05 school year, with achievement gains reported at the end of the year.
But results of the study aren’t expected until April 2006 at the earliest, Dynarski said. That’s because research, like any form of analysis, takes time–something school leaders say they have little of, considering the law calls for them to make these changes now.
In the meantime, many say what works under the law is open to interpretation.
ED officials “don’t know what they mean in the law…or [at] least they do not clarify what they want,” said Marc Liebman, superintendent of the Marysville Joint Unified School District in California. “I don’t believe they want [every piece of software] tested in rigorous field tests, but that the strategies used be based on tested techniques. Having done statistical testing of instructional approaches, this is a multiple-year, multiple-sample process. For each textbook or instructional material to go through this [process] is not logical, realistic, or possible.”
What the administration wants, Liebman believes, is for schools to choose “materials that are developed around proven strategies. We therefore look to the existing body of research from colleges and universities on which to base curriculum and professional development.”
But not everyone sees it that way. “My understanding of scientifically based research is that there be random sampling,” said Ken Eastwood, superintendent of the Oswego City School District in New York. “That is nearly impossible. Have they revised the definition to [say] something less stringent? If so, there are a lot [of] people waiting to hear. Otherwise, no one is going to meet the criteria.”
While control-group testing would certainly be the preferred mode of analysis for deeming a product “scientifically based” under the law, ED’s Whitehurst says the department recognizes that, to date, the body of research for products tested under that level of rigor is virtually non-existent.
The scenario is perhaps best for reading programs, Whitehurst said. But the criteria for meeting the law’s requirements are even more ambiguous when it comes to validating math and science options. Sometimes educators “have to make a best guess” based on the data that are available, he concluded: “It’s not as clear as we would like it to be. But that’s the reality of it.”
In the event that high-quality, control-based data do not exist, Whitehurst said, school leaders should turn to products that employ the strategies and meet the guidelines proposed by expert panels. For example, when selecting reading software, educators should, at the very least, make sure the solution is in line with the “five pillars of reading” as outlined by the National Reading Panel.
But Dynarski said he’d be leery of products that cling to talking points without the actual classroom-tested evidence to support such claims. If the product doesn’t have classroom-based evidence that shows a marked improvement in student achievement, then “it doesn’t tell me anything,” he said–“except maybe that you’ve been reading the literature that’s out there.”
Some companies say they’ve enjoyed success in getting schools to buy into the test-bed approach. But it hasn’t been easy.
Marcy Baughman, director of research for Pearson Education’s K-12 School Group, which includes the Scott Foresman and Prentice Hall publishing divisions, said the company currently is engaged in 21 evidence-based studies of its products in schools across the country.
To reduce the anxiety that sometimes follows large corporations into the public schools, Baughman said Pearson turns to third-party researchers to conduct the experiments, with the understanding that the researchers will be as sensitive as possible to the schools involved.
“One of the easiest ways to get the districts to feel comfortable is to let them know that you are willing to share the information [with them],” Baughman said. “We’re not testing their students. What we’re testing, really, is our product.”
Pearson also plans to share the information it gathers from its on-site evaluations with officials at ED’s What Works Clearinghouse (WWC). Founded in August 2002, WWC is intended to provide a repository of scientifically proven teaching practices for educators, policy makers, and the general public. The results from the Mathematica study also will reside there, officials said.
Sloane O’Neal, vice president of marketing for educational software provider CompassLearning, said she gets the feeling resistance from the school community is waning.
“Initially, after NCLB was passed, there was some confusion in the marketplace,” she said. “But people have made it overly complicated, I think.”
Yet, for all the good research does, it can be an expensive process–and one that might preclude smaller software companies from being able to compete in the new market climate. CompassLearning alone spends “several hundred thousand dollars a year” on such projects, O’Neal said.
Perhaps more trying is the amount of time each project takes to complete.
Though the research-based approach can lead to hard evidence of student achievement, O’Neal said she’s heard grumblings that the length of such projects also might limit the number of solutions companies can roll out to schools, simply because of the additional time and money they must spend on research and development of new products.
“It would stifle innovation if we had to spend years testing the product before we could launch it,” she said.
But while confusion still reigns over how to comply with the SBR provision of the law, few educators deny the benefits of incorporating proven educational strategies in the classroom.
Dave Craven, instructional technology director for the Cherry Creek Schools in Greenwood Village, Colo., said anytime a product has a body of evidence to stand on, it makes an educator’s decision to use that solution “just that much more sound.”