Newspaper Gets an “F” for Its Teacher Evaluations

Monday, June 25, 2012

 

A method for rating teachers based on student test scores, co-developed and publicized by the Los Angeles Times in recent years, has been criticized by the National Education Policy Center (NEPC) as “inaccurate due to unreliable methodology.”

The Times has written dozens of stories about the so-called value-added system that measures year-to-year student progress on standardized tests and uses the resulting data to estimate the effectiveness of teachers. The newspaper publishes its results for K-12 students on a regular basis.
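In broad strokes, a value-added estimate comes from a statistical model that predicts each student's current test score from the prior year's score and attributes the unexplained portion of the gain to the teacher. The sketch below, written in Python with simulated data and invented numbers, illustrates that general idea with a simple regression of current scores on prior scores plus teacher indicators; it is only an illustration of the approach, not the Times' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each student has a prior-year score, a current-year score and
# an assigned teacher.  The teacher "effects" here are invented for this demo.
n_teachers = 5
students_per_teacher = 40
true_effects = rng.normal(0, 3, n_teachers)

teacher = np.repeat(np.arange(n_teachers), students_per_teacher)
prior = rng.normal(50, 10, teacher.size)
current = 5 + 0.9 * prior + true_effects[teacher] + rng.normal(0, 5, teacher.size)

# Regress current scores on prior scores plus teacher indicator columns
# (teacher 0 is the reference category).
X = np.column_stack(
    [np.ones(teacher.size), prior]
    + [(teacher == t).astype(float) for t in range(1, n_teachers)]
)
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

# The estimates are relative to teacher 0 -- a comparative, not absolute, measure.
estimated = np.concatenate(([0.0], coef[2:]))
print("estimated effects vs. teacher 0:", np.round(estimated, 2))
print("true effects vs. teacher 0:     ", np.round(true_effects - true_effects[0], 2))
```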

The approach is controversial among researchers and educators, although its use is growing among communities searching for ways to improve troubled educational systems. Use of a similar rating system is a hotly contested issue between administrators and teachers in the sprawling Los Angeles Unified School District. The teachers’ union filed an unfair labor practices charge with the state Public Employment Relations Board in May, complaining that the district was attempting to create a value-added system unilaterally, without first negotiating with the union.

Advocates for the method see it as a critical tool for improving schools, one that brings a measure of objective analysis to teacher evaluations and replaces a largely inefficient, subjective system that prevents the weeding out of bad instructors. Critics say the method’s results are inaccurate, and that it simplistically scapegoats teachers while ignoring complicated issues of funding, curriculum, the role of families and societal influences in a modern, electronic age.

The Times website that hosts the ratings classifies teachers as “least effective, less effective, average, more effective and most effective,” prompting Bill Gates, co-chair of the Bill and Melinda Gates Foundation, which supports the Measures of Effective Teaching Project, to call them a “public shaming.” A New York Post editorial called the ratings a “big victory” for “accountability and transparency.”

The NEPC study’s author, Catherine S. Durso, summarized the complaints by critics of the value-added method in general and as used by the Times: “[They] do not provide guidance for improvement, are comparative rather than absolute measures, assess a small part of [a] teacher’s responsibilities, force different kinds of teaching into one scale, do not produce consistent results for given teachers over time, and may not identify effects actually caused by the teachers.”

The highly technical study made a number of criticisms of the model used by the Times, which was adapted for its 2010 teacher ratings from one developed by Richard Buddin, a senior economist at the RAND Corporation, and revised in 2011. The NEPC study blamed the Times data’s “imprecision and inaccuracy” on a raft of technical mistakes, including a failure to properly incorporate outside factors (like a teacher changing schools).
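An outside factor of that kind can, in principle, be folded into the same sort of regression by adding an indicator column for it. The hedged sketch below, again in Python with invented data and names, adds a school indicator alongside the teacher indicators so that a school-level difference is absorbed by its own coefficient rather than attributed to a teacher who changed schools; it is an illustrative extension, not the NEPC's or the Times' actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 200 students, 5 teachers, 2 schools, with a school-level
# difference of 4 points built into the simulated current-year scores.
n_students = 200
prior = rng.normal(50, 10, n_students)
teacher = rng.integers(0, 5, n_students)
school = rng.integers(0, 2, n_students)
current = 5 + 0.9 * prior + teacher + 4.0 * school + rng.normal(0, 5, n_students)

# Design matrix: intercept, prior score, teacher dummies, and a school dummy.
cols = [np.ones(n_students), prior]
cols += [(teacher == t).astype(float) for t in range(1, 5)]   # teacher indicators
cols += [(school == 1).astype(float)]                         # school indicator
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

# The school-level difference is picked up by its own coefficient (about 4),
# instead of inflating or deflating the teacher estimates.
print("estimated school-level difference:", round(float(coef[-1]), 2))
```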

–Ken Broder

 

To Learn More:

An Analysis of the Use and Validity of Test-Based Teacher Evaluations Reported by the Los Angeles Times: 2011 (by Catherine S. Durso, National Education Policy Center)

Test-based Teacher Evaluation Earns an F, Again* (by P.L. Thomas, DailyKos)

Teachers Union Challenges L.A. Unified's New Evaluation Process (by Howard Blume, Los Angeles Times)

Getting Teacher Evaluation Right (by Valerie Strauss, Washington Post)

Due Diligence and the Evaluation of Teachers (by Derek C. Briggs and Ben Domingue, National Education Policy Center)

Grading the Teachers: Value-Added Analysis (Los Angeles Times)

 
