Why releasing teacher ‘rankings’ is a bad idea
Editorial—New York City on Feb. 24 became the latest municipality to release the “value-added” rankings of thousands of public school teachers, despite opposition from the city’s teachers’ union, which lost a lawsuit to prevent the release of this information. Here’s why the city’s move—and others like it—is seriously misguided.
According to the Associated Press (AP), the rankings track 18,000 math and English public school teachers from fourth through eighth grades over a three-year period from 2007 to 2010. The information connects student achievement to the teachers responsible for their progress, using a controversial statistical method called the “value-added” model. The method aims to measure how much academic growth students have achieved in a given year and attributes this growth, or lack thereof, to the influence of the student’s teacher in that subject.
The United Federation of Teachers fought unsuccessfully to prevent the public release of this information after five media organizations filed Freedom of Information requests for the ratings in 2010, the AP reported. The requests came after the Los Angeles Times published similar information for 6,000 Los Angeles teachers.
“The Department of Education should be ashamed of itself,” UFT President Michael Mulgrew reportedly said in a statement. “It has combined bad tests, a flawed formula, and incorrect data to mislead tens of thousands of parents about their children’s teachers.”
New York’s teachers had already seen their own reports, the AP reported, but the Feb. 24 release of the information makes it available to parents and others. Educators are worried that parents might misinterpret the ratings, and they argue that many other factors outside the classroom—including a child’s home life, parental support, and health issues, among others—also affect student achievement.
According to the AP, New York City Schools Chancellor Dennis Walcott has said he is concerned the rankings will be used to highlight individual teachers and hold some up to ridicule or shame. Based on what happened in Los Angeles, that’s not an unlikely scenario: One L.A. teacher, widely respected by students and colleagues, was so distraught by the Times’ rankings that he took his own life.
eSchool News has published numerous articles suggesting why efforts by education reformers to evaluate teachers according to their “value-added” rankings alone are flawed and misguided.
In this Feb. 22 Viewpoint, Baltimore educator Jay Gillen eloquently argues that a teacher’s value cannot be summed up by his or her students’ test scores:
“If we really care about the education of young people in poverty, we will stop focusing on test results and pay much more attention to the quality of life students and families endure. The more their parents and the students themselves are employed, the better their housing and transportation, the better their health care and nutrition, the more they learn…” Gillen writes.
In this story from November, we highlighted a report from the Center for American Progress, which concluded that publicly identifying teachers with value-added estimates of their abilities actually undermines efforts to improve public schools:
The report argues that value-added estimates shouldn’t be the sole determinant of a teacher’s worth, because other factors also influence student outcomes. While value-added estimates can be useful internal tools for helping to direct professional development efforts, the report says, they paint an incomplete picture that is misleading to the public and could have serious unintended consequences for teachers.
Even the Bill and Melinda Gates Foundation, one of the country’s biggest advocates for using student achievement data to help measure teacher quality, argues that school leaders should use “multiple measures” to evaluate teachers’ effectiveness:
In a progress report on the foundation’s $45 million “Measuring the Effectiveness of Teachers” project, researchers conclude that the value-added model holds promise as an evaluation tool—but it can only be effective when combined with “measures from different sources to get a more complete picture of teaching practice. … Value-added scores alone, while important, do not recommend specific ways for teachers to improve.”
The bottom line: It might be legally sanctioned, but publishing teacher “rankings” according to a model that is intended to be one small piece of a larger evaluation system is the wrong thing to do.
It’s morally wrong to subject teachers to this public humiliation without the benefit of necessary additional context. What’s more, it could set serious school reform efforts back even further.
In discussions about whether student test scores should factor into teacher evaluation systems, teachers’ unions often say they are concerned that the data will be used punitively, rather than as a tool to help teachers improve. The events in New York and Los Angeles are Exhibits A and B for why unions hold this fear.
If education reformers are truly serious about improving teacher effectiveness, then they should join unions to fight the public release of “value-added” teacher rankings. The only way student achievement data can become an effective tool to improve teaching and learning is if educators can trust those who are using the data—and the only way to earn this trust is not to abuse it.