Wednesday, January 2, 2019

Too much credit given to most recent teacher for improved student learning


The federal Race to the Top competition provided significant impetus for states to adopt value-added models as a part of their teacher evaluation systems. Such models typically link students to their teachers in the spring semester when statewide tests are administered and estimate a teacher's performance based on his or her students’ learning between the test date in the previous school year and the test date in the current year. Because of data limitations in many states, however, the effect of most student learning experiences between two consecutive tests cannot be distinguished from, and hence is often attributed to, the value added of teachers in the spring classrooms.
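For readers unfamiliar with these models, a typical value-added specification looks roughly like the following. The notation is a generic sketch for illustration only and is not drawn from the study itself:

```latex
% Generic value-added model (illustrative notation, not the study's exact specification):
%   y_{i,t}          : student i's spring test score in year t
%   \theta_{j(i,t)}  : value added of the teacher in student i's spring classroom
%   X_{i,t}          : observed student and classroom covariates
y_{i,t} = \lambda\, y_{i,t-1} + \theta_{j(i,t)} + X_{i,t}'\beta + \varepsilon_{i,t}
```

In a specification like this, everything that happens to the student between the two test dates, including instruction received from the previous school year's teacher after the prior spring test, is folded into the spring-classroom teacher's effect or the error term, which is the misattribution at issue.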

This study examines how teacher evaluations are affected by such misattribution and explores methods that can provide the best approximation in the absence of more detailed data. The researchers find that ignoring previous school-year teachers' contributions to student learning has a sizeable impact on estimated value-added scores for teachers in the current school year. They also present an alternative approach that can be implemented without more detailed data on student learning experiences and that closely approximates teacher value-added scores estimated from complete student enrollment and roster information.
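The sketch below is a minimal simulation, not the study's method or data, that illustrates the mechanism: when classes feed into one another across years, a value-added score that credits only the current-year teacher absorbs part of the prior-year teacher's contribution. All names and parameter values (number of teachers, the `stay` probability, the 50/50 split of the gain) are illustrative assumptions.

```python
# Minimal simulation sketch of misattribution in a naive value-added score.
# Illustrative assumptions throughout; not the authors' model or data.
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 20            # teachers per school year (hypothetical)
students_per_class = 25
n_students = n_teachers * students_per_class

# True (unobserved) teacher effects for the prior and current school years.
prior_effect = rng.normal(0.0, 1.0, n_teachers)
current_effect = rng.normal(0.0, 1.0, n_teachers)

# Current-year classroom assignment.
current_teacher = rng.integers(0, n_teachers, n_students)

# Feeder pattern: with probability `stay`, a student's prior-year class has the
# same index as the current one, so classmates tend to share a prior-year teacher.
stay = 0.8
prior_teacher = np.where(rng.random(n_students) < stay,
                         current_teacher,
                         rng.integers(0, n_teachers, n_students))

# Learning between two consecutive spring tests reflects BOTH teachers: part of
# the gain accrues before the class changes hands and part after.
gain = (0.5 * prior_effect[prior_teacher]
        + 0.5 * current_effect[current_teacher]
        + rng.normal(0.0, 1.0, n_students))

# Naive value-added score: the mean gain in each current-year teacher's class,
# which silently attributes the prior-year contribution to the spring teacher.
naive_vam = np.array([gain[current_teacher == t].mean() for t in range(n_teachers)])
naive_vam -= naive_vam.mean()

# The naive score tracks the true current-year effect only partially and is
# contaminated by the prior-year effect of the feeder classroom.
print("corr(naive VAM, true current-year effect):",
      round(float(np.corrcoef(naive_vam, current_effect)[0, 1]), 2))
print("corr(naive VAM, prior-year effect of feeder class):",
      round(float(np.corrcoef(naive_vam, prior_effect)[0, 1]), 2))
```

With a strong feeder pattern, the second correlation is clearly nonzero, which is the sense in which credit for the previous teacher's work leaks into the current teacher's score; the design choice of splitting the gain evenly between the two school years is arbitrary and only serves to make the leakage visible.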
