
Do education policy makers believe in science?



John Hattie is an education researcher who changed the way we think about what works in the classroom. His meta-study, Visible Learning (2009), analyzed 50,000 studies covering more than 80 million students. This seminal work, with its 2011 and 2015 updates, shone a light on the importance of visible learning and showed us which interventions are most effective in education. His research ranked almost 200 different factors by their influence on student achievement and uncovered insights about relative effectiveness that should drive policy discussions for education stakeholders.

Yet, somehow, it doesn’t. Legislative sessions in many states closed recently with new or updated laws that directly contradict volumes of robust scientific research. If we continue to neglect existing evidence, we endanger our ability to make progress and compete in a global economy.

Moving the Needle on Improving Results

To understand how potentially damaging uninformed education policy can be, we need to look at what Hattie uncovered about relative effectiveness.

First, it’s worth noting that his work is a meta-analysis, a statistical method that combines the results of many studies to produce a more robust measurement of impact, or effect size. With his 2015 update, Hattie’s meta-analysis drew on almost 1,200 existing meta-analyses conducted by other researchers, allowing him to reach far more comprehensive conclusions than any previous researcher about how different factors influence student achievement and how they compare to one another.

When we look at Hattie’s full list of 195 factors, we find that almost everything causes students to make some amount of progress. Only four percent of the researched factors result in students knowing less at the end of the school year than they did at the beginning, which seems like great news.

If almost everything we can think of researching has a positive effect on student achievement, it should be fairly easy to increase our results, right? Not so fast.

Basing Policy Decisions on Education’s Existing Effect

Our teachers and classrooms already have an effect on achievement. The majority of students know more at the end of a school year than they did at the beginning. Thanks to the millions of students who have moved through our public education system, we now have lots of data about how much progress a typical student in a typical classroom will make and can compare that to the amount of progress our standards or curriculum indicate a student should make within a year.

For Hattie’s analysis, an effect size of 0.5 is equivalent to one year’s worth of progress through curriculum. The average classroom has an effect around 0.4. Imagine a typical student just beginning a new grade. The table below illustrates effect size and growth:

Effect size Impact
0 Zero growth
0.4 Average growth
0.5 One year’s growth
1.0 Two years’ growth
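
Read literally, the table implies a simple linear conversion from effect size to years of progress (0.5 equals one year). The short Python sketch below assumes that linear reading; it is only an illustration of the scale, not part of Hattie’s methodology.

```python
# Illustrative only: assumes the linear reading of the table above,
# where an effect size of 0.5 corresponds to one year of curriculum progress.

ONE_YEAR_EFFECT = 0.5  # effect size the article equates with one year's growth

def years_of_growth(effect_size: float) -> float:
    """Translate an effect size into approximate years of curriculum progress."""
    return effect_size / ONE_YEAR_EFFECT

for d in (0.0, 0.4, 0.5, 1.0):
    print(f"effect size {d:.1f} -> about {years_of_growth(d):.1f} year(s) of growth")
```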

Policy decisions should be based on the average effect education is already having. We can’t base decisions on whether a factor has any effect at all; what matters is the relative efficacy of educational activities.


How Popular Policies Measure on the Effectiveness Scale

Additionally, effects aren’t cumulative. We can’t take a 0.4 factor, add a 0.2, and get a classroom with an effect of 0.6. The combination works more like an average: combine a 0.2 factor with a 0.4 and you’ll end up with something closer to 0.3. (That’s an oversimplification, but it works for the gist; see the sketch after the table below.) So if we’re considering ways to improve achievement, we should discard anything that doesn’t measure up to what we’re already doing. If a factor isn’t more effective than our average, it actually slows student growth. So how do popular policies rate on the scale? Here’s a sample:

Low effect: Class size (0.21), Charter schools (0.07), Retention (-0.17)
Average effect: Phonics (0.52), Small group instruction (0.47), Goal setting (0.40)
High effect: Collective teacher efficacy (1.57), Response to intervention (1.07), Frequent feedback (0.73)
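
To make the averaging point concrete, here is a minimal sketch in Python. It uses the equal-weight average the text describes, which is a deliberate oversimplification rather than Hattie’s actual method for combining effect sizes; the example values come from the tables above.

```python
# A minimal sketch of the simplified "averaging" model described above.
# The equal-weight average is the article's oversimplification, not
# Hattie's actual methodology for combining effect sizes.

AVERAGE_CLASSROOM_EFFECT = 0.4  # effect the typical classroom already has

def combined_effect(existing: float, added: float) -> float:
    """Naively combine an existing classroom effect with a new factor by averaging."""
    return (existing + added) / 2

# A low-effect factor such as class size reduction (0.21) drags the
# classroom below its existing 0.4 average ...
print(combined_effect(AVERAGE_CLASSROOM_EFFECT, 0.21))  # ~0.31

# ... while a high-effect factor such as frequent feedback (0.73) lifts it
# past the 0.5 mark the article equates with a full year's growth.
print(combined_effect(AVERAGE_CLASSROOM_EFFECT, 0.73))  # ~0.57
```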

Class Size

Most states have regulations around class size, so the $37 billion Florida has invested in class-size reduction makes intuitive sense: if teachers have fewer students, each one will get more attention and make more progress. However, more isn’t necessarily better. If a little math-fact drill ‘n’ kill is good, a lot must be great, right?

All other things being equal, without changing what activities are happening in the classroom, a 20-student class won’t achieve any more than a 30-student class. If we want students to benefit from reduced class size, we need to offer schools more professional development so teachers can use their newly freed-up time to improve their practice.

Charter Schools

What about charter schools? Federal budget proposals would increase charter-specific spending to $500 million for the next fiscal year. Champions of “school choice” advocate for charter schools to increase local education options and point to data reporting that charter schools work, but a 0.07 effect (marginally better than zero growth) doesn’t support their argument. If these schools aren’t measurably better than what we’re already doing, then instead of re-allocating public funds to build what might be duplicative infrastructure, we should look for more impactful ways to change our existing schools.

Retention

Retention laws, however, are arguably the most damaging policy we enact.

Sixteen states and Washington, D.C. have mandatory retention laws for students who are not reading at grade level by the end of third grade; eight more states allow retention but don’t require it. Advocates for retention argue that “social promotion” prevents students from receiving the additional supports they need to improve their literacy skills, but research tells us there’s almost nothing we could do that would be more harmful to a student’s progress. We know that students who are held back don’t learn more than their promoted peers; in fact, they often learn less, both in their retention year and over the long term.

If we continue to enact policies that haven’t improved student achievement and learning the way their proponents promise, we won’t see the dramatic gains we would see by focusing on the changes Hattie’s research pinpoints as most effective.

More effective drivers include frequent feedback and response to intervention. Science doesn’t lie, and legislatures across the country will continue to stunt our progress if they don’t consider the scientific evidence and let it influence policy. For now, we’re left asking, “Do education policy makers believe in science?”
