Why releasing teacher ‘rankings’ is a bad idea

New York City on Feb. 24 became the latest municipality to release the “value-added” rankings of thousands of public school teachers.

Editorial—New York City on Feb. 24 became the latest municipality to release the “value-added” rankings of thousands of public school teachers, despite opposition from the city’s teachers’ union, which lost a lawsuit to prevent the release of this information. Here’s why the city’s move, like others before it, is seriously misguided.

According to the Associated Press (AP), the rankings track 18,000 math and English public school teachers from fourth through eighth grades over a three-year period from 2007 to 2010. The information connects student achievement to the teachers responsible for their progress, using a controversial statistical method called the “value-added” model. The method aims to measure how much academic growth students have achieved in a given year and attributes this growth, or lack thereof, to the influence of the student’s teacher in that subject.
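
The AP description above gives only the general idea; the city’s actual model is far more elaborate and controls for factors the article doesn’t enumerate. As a loose illustration of the underlying logic, the toy Python sketch below, using entirely invented teachers, scores, and a deliberately crude growth assumption, computes a “value added” figure as the gap between what students actually scored and what a simple prediction from their prior-year scores said they would score.

```python
# Illustrative only: a toy "value-added" calculation, not New York City's
# actual model. All names and scores are invented.
from statistics import mean

# Each record: (teacher, prior-year test score, current-year test score)
students = [
    ("Teacher A", 62, 70), ("Teacher A", 75, 80), ("Teacher A", 55, 64),
    ("Teacher B", 68, 69), ("Teacher B", 80, 78), ("Teacher B", 59, 62),
]

def predicted_score(prior: float) -> float:
    """Crude stand-in for a growth model: assume the average student
    gains 5 points a year (an invented figure)."""
    return prior + 5

# A teacher's "value added" is the average gap between what students
# actually scored and what the growth assumption predicted for them.
gaps_by_teacher: dict[str, list[float]] = {}
for teacher, prior, actual in students:
    gaps_by_teacher.setdefault(teacher, []).append(actual - predicted_score(prior))

for teacher, gaps in gaps_by_teacher.items():
    print(f"{teacher}: value-added estimate {mean(gaps):+.1f} points")
```

Even this toy version shows how few data points can sit behind a single teacher’s number, one reason estimates of this kind can be noisy and contentious.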

The United Federation of Teachers fought unsuccessfully to prevent the public release of this information after five media organizations filed Freedom of Information requests for the ratings in 2010, the AP reported. The requests came after the Los Angeles Times published similar information for 6,000 Los Angeles teachers.

Readers sound off on value-added model, district efficiency

While reader response was mixed, many readers were skeptical of these new measures.

In recent eSchool News stories, we asked readers if teachers should be evaluated using the value-added model, which uses a student’s past performance on high-stakes tests to determine how much “value” a teacher has added in a given year, and whether school districts should be judged based on their efficiency—that is, how well their students achieve in comparison to how much the district spends on each child. The results are in, and our readers were largely skeptical of these controversial measures.
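
The efficiency measure described above is, at bottom, a ratio of achievement to per-pupil spending. The story doesn’t give a precise formula, but a minimal sketch of that kind of comparison, with invented figures and hypothetical district names, might look like this:

```python
# Illustrative only: a toy "efficiency" comparison of achievement relative
# to per-pupil spending. District names and figures are invented.
districts = {
    "District X": {"avg_achievement": 78.0, "spending_per_pupil": 14_000},
    "District Y": {"avg_achievement": 74.0, "spending_per_pupil": 10_500},
}

for name, d in districts.items():
    # Points of average achievement per $1,000 spent on each student
    efficiency = d["avg_achievement"] / (d["spending_per_pupil"] / 1_000)
    print(f"{name}: {efficiency:.2f} achievement points per $1,000 per pupil")
```

Note that in this toy example the lower-scoring district comes out ahead, one illustration of why boiling district performance down to a single efficiency number invites skepticism.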

In her story “Should student test scores be used to evaluate teachers?” Contributing Editor Cara Erenben reports on early results from a Gates Foundation study suggesting that researchers have found some validity in the value-added model. But when asked, “Should the value-added model be used to evaluate teachers?” only four percent of readers said it was a “valid and objective tool for measuring effectiveness.”

Fifty-four percent of readers said the model should be used, “but only in conjunction with other measures of teacher performance.” Forty-two percent of readers said they think the model is “unreliable.”

Should student test scores be used to evaluate teachers?

Teachers who lead students to achievement gains in one year or in one class tend to do so in other years and other classes, the report said.

The so-called value-added model is an “imperfect, but still informative” measure of teacher effectiveness, especially when it is combined with other measures, according to the preliminary results of a large-scale study funded by the Bill and Melinda Gates Foundation. The study’s early findings have ratcheted up the debate over whether student test scores should be used in evaluating teachers—and if so, how.
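
The report’s consistency claim noted above (teachers who produce gains in one year tend to do so in other years and classes) is essentially a statement about how strongly a teacher’s estimates in one year track the next. A minimal sketch of that kind of check, run on invented estimates rather than anything from the study, could look like this:

```python
# Illustrative only: checking year-to-year consistency of (invented)
# teacher value-added estimates with a Pearson correlation.
from statistics import correlation  # available in Python 3.10+

# Hypothetical value-added estimates for the same six teachers in two years
year_one = [2.1, -1.3, 0.4, 3.0, -0.8, 1.6]
year_two = [1.7, -0.9, 0.9, 2.4, -1.5, 1.1]

r = correlation(year_one, year_two)
print(f"Year-to-year correlation of estimates: r = {r:.2f}")
```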

The report, entitled “Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project,” reportedly gives the strongest evidence to date of the validity of the value-added model as a tool to measure teacher effectiveness.

The $45-million Measures of Effective Teaching (MET) Project began in the fall of 2009 with the goal of building “fair and reliable systems for teacher observation and feedback.”

Putting our ideas of assessment to the test

How we evaluate students, and teachers, is at a crossroads.

Default Lines column, October 2010 issue of eSchool News—Here’s a pop quiz: What are the skills that today’s students will need to be successful in tomorrow’s workplace?