Tuesday, May 8, 2018

Rating Teacher-Preparation Programs


Recent policies intended to improve teacher quality have focused on the preparation that teachers receive before entering the classroom. A short-lived federal rule would have required every state to assess and rank teacher-preparation programs by their graduates’ impact on student learning. Though the federal rule was repealed, last year some 21 states and the District of Columbia opted to rank teacher-preparation programs by measures of their graduates’ effectiveness in the classroom, such as their value-added scores.

But what does the research say? Do teachers from different preparation programs differ substantially in their impacts? Can outcomes like student test performance reliably identify more or less effective teacher-preparation programs?

To address these questions, this study re-analyzed prior evaluations of teacher-preparation programs from six locations: Florida, Louisiana, Missouri, Texas, Washington State, and New York City. The researchers found negligible differences in teacher quality between programs, amounting to no more than 3 percent of the average test-score gap between students from low-income families and their more affluent peers. Differences between programs were negligible even in Louisiana and New York City, where earlier evaluations had reported substantial differences and fueled the push for program accountability.

Most differences between programs would be too small to matter, even if we could measure them accurately. And we can rarely measure them accurately. The errors we make in estimating program differences are often larger than the differences we are trying to estimate. With rare exceptions, we cannot use student test scores to say whether a given program’s teachers are significantly better or worse than average. If policymakers want to hold preparation programs accountable for the quality of their graduates, there may be better ways to do it.
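The point about estimation error swamping true differences can be illustrated with a small simulation. This is a hypothetical sketch, not the study's method: the parameter values below (a tiny true between-program spread, realistic sampling noise, a few dozen teachers per program) are illustrative assumptions chosen to show how the noise in a program's estimated average can dwarf the real differences one is trying to detect.

```python
import random
import statistics

random.seed(42)

# Illustrative assumptions (not figures from the study):
true_program_sd = 0.01    # true between-program SD of teacher value-added
teacher_noise_sd = 0.15   # noise SD of an individual teacher's estimate
teachers_per_program = 30 # teachers observed per program
n_programs = 50           # number of programs being compared

truths, estimates = [], []
for _ in range(n_programs):
    true_effect = random.gauss(0, true_program_sd)
    # A program's estimate is the average of its teachers' noisy scores
    sample = [random.gauss(true_effect, teacher_noise_sd)
              for _ in range(teachers_per_program)]
    truths.append(true_effect)
    estimates.append(statistics.mean(sample))

errors = [est - t for est, t in zip(estimates, truths)]
print("SD of true program effects:", round(statistics.pstdev(truths), 3))
print("SD of estimation errors:   ", round(statistics.pstdev(errors), 3))
```

Under these assumptions the spread of estimation errors (roughly 0.15 divided by the square root of 30, about 0.03) exceeds the spread of true program effects (0.01), so a program's ranking is driven more by noise than by its actual quality.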
