Expansion of the
use of student test score data to measure teacher performance has fueled recent
policy interest in using those data to measure the effects of school
administrators as well. However, little research has considered the capacity of
student performance data to uncover principal effects.
Filling this gap,
this article
identifies multiple conceptual approaches for capturing the contributions of
principals to student test score growth, develops empirical models to reflect
these approaches, examines the properties of these models, and compares the
results of the models empirically using data from a large urban school
district. The article then assesses the degree to which the estimates from each
model are consistent with measures of principal performance that come from
sources other than student test scores, such as school district evaluations.
The results show
that the choice of model is substantively important for assessment. While some models
identify principal effects as large as 0.18 standard deviations in math and
0.12 in reading, others find effects as low as 0.05 (math) or 0.03 (reading)
for the same principals.
The
most conceptually unappealing models, which over-attribute school effects to
principals, align more closely with nontest measures than do approaches that
more convincingly separate the effect of the principal from the effects of
other school inputs.