Tuesday, July 16, 2013
CREDO’s Significantly Insignificant Charter Schools Findings
Even Setting Aside Its Analytical Flaws, Study Merely Confirms that Charter Schools Perform on Par with Traditional Public Schools
The Center for Research on Education Outcomes (CREDO) at Stanford University announced in a June 25th press release that “charter school students now have greater learning gains in reading than their peers in traditional public schools.” This conclusion was repeated in newspapers across the nation. But there is much less to the CREDO study and to its claim than meets the eye, according to a new review.
The National Charter School Study 2013 examined charter schools in 27 states and New York City. Andrew Maul and Abby McClelland reviewed the study for the Think Twice think tank review project. The review is published today by the National Education Policy Center, housed at the University of Colorado Boulder School of Education.
Maul is an assistant professor in the Research and Evaluation Methodology (REM) program at CU Boulder. His work focuses on measurement theory, validity, and generalized latent variable modeling. McClelland is a Ph.D. student in the REM program.
The CREDO study attempts to identify differences in student performance between charter schools and traditional public schools. Its primary findings were: (a) a small positive effect of being in a charter school on reading scores and no impact on math scores; and (b) a relative improvement in average charter school quality since CREDO’s 2009 study.
Maul and McClelland, however, find “significant reasons for caution in interpreting the study’s results.” Some of those reasons concern important choices regarding analytic methods; others concern basic questions of how meaningful the findings really are.
The reviewers point out that the statistical approach used to compare charter students to so-called “virtual twins” in traditional public schools may not adequately control for differences between families who select a charter school and those who do not, which could bias the results.
CREDO’s researchers also don’t sufficiently justify their estimation of growth, which they express using the problematic “days of learning” approach. In addition, their regression models rest on two key assumptions – independence of observations and the absence of measurement error – that the study does not address.
These ‘technical’ concerns could easily introduce distortions in the study’s results that are substantially larger than the highlighted effects attributed to differences between charter schools and traditional public schools.
Even if concerns over the study’s analytic methods are set aside, however, Maul and McClelland point out that the study itself shows only a tiny real impact on the part of charter schools: “less than one hundredth of one percent of the variation in test performance is explainable by charter school enrollment,” they write. Specifically, students in charter schools were estimated to score approximately 0.01 standard deviations higher on reading tests and 0.005 standard deviations lower on math tests than their peers in traditional public schools.
“With a very large sample size, nearly any effect will be statistically significant,” the reviewers conclude, “but in practical terms these effects are so small as to be regarded, without hyperbole, as trivial.”
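The reviewers’ point – that a very large sample can make a trivially small effect “statistically significant” – can be sketched with a little arithmetic. The snippet below is illustrative only: it assumes, hypothetically, a two-group comparison with a million students per group, unit-variance test scores, and CREDO’s reported 0.01 standard-deviation reading effect; the formulas (z statistic for a difference in means, and the standard d²/(d² + 4) conversion from an effect size to variance explained) are textbook approximations, not CREDO’s actual model.

```python
import math

def z_for_mean_difference(d, n_per_group):
    """z statistic for a difference in means of d standard deviations,
    assuming two equal groups with known unit variance (a simplification)."""
    se = math.sqrt(2.0 / n_per_group)  # standard error of the difference
    return d / se

def variance_explained(d):
    """Approximate share of score variance explained by group membership,
    via the standard point-biserial conversion r^2 = d^2 / (d^2 + 4)."""
    return d**2 / (d**2 + 4)

d = 0.01          # CREDO's estimated reading effect, in standard deviations
n = 1_000_000     # hypothetical per-group sample size, for illustration

z = z_for_mean_difference(d, n)
r2 = variance_explained(d)
print(f"z statistic: {z:.1f}")            # far beyond 1.96, so p << 0.05
print(f"variance explained: {r2:.6%}")    # well under one hundredth of 1%
```

With these (hypothetical) numbers, the effect is wildly “significant” in the statistical sense, yet charter enrollment would account for well under a hundredth of one percent of the variation in scores – exactly the distinction between statistical and practical significance the reviewers draw.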