Tuesday, April 30, 2013
KIPP Study is Useful, but It Overreaches
Do middle schools operated by the Knowledge Is Power Program (KIPP) excel in promoting student achievement? A new review gives a second look at a recent study that answered that question “yes.”
In a study released in February, Mathematica Policy Research concluded that KIPP middle school students achieved substantially higher test scores than comparison students who did not attend KIPP schools. A review published today of the Mathematica study finds that while the original evaluation was carefully planned and executed, and the evidence supports a positive impact, the authors may have overstated the benefits attributable to KIPP.
The study, KIPP Middle Schools: Impacts on Achievement and Other Outcomes, was reviewed for the Think Twice think tank review project by Professor Gregory Camilli of the University of Colorado Boulder. The review is published by the National Education Policy Center, housed at the CU Boulder School of Education.
Using two different approaches, the Mathematica researchers concluded that after three years, KIPP students outscored comparison students not attending KIPP schools by the equivalent of 11 months of additional learning in math and eight months in reading. Camilli observes that these results are similar in size to those found in some previous educational experiments, including a small experiment with KIPP schools.
Camilli explains that while the KIPP outcomes could prove substantial if they were found to persist into later grades, the report appears to overstate the benefits.
For one thing, Camilli points out, “translating educational outcomes into ‘months’ of additional learning is an inexact science and can lead to absurd results if taken literally.” Taking one month of learning between grades 11 and 12 as the yardstick, for example, information supplied by test publishers could be used (in this case, misused) to show that children learn the equivalent of 10 years between kindergarten and first grade. Additionally, the report’s estimates that attempt to take attrition into account are smaller than the estimates used to draw its conclusions about KIPP’s effectiveness.
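To see how the conversion can go astray, consider a hypothetical example (the numbers here are illustrative, not taken from the report or from any particular test publisher): suppose a vertically scaled test shows students gaining about 40 scale points between kindergarten and first grade, but only about 4 points between grades 11 and 12. One month of grade 11-12 learning would then be 4 ÷ 12, or roughly a third of a point, so the kindergarten-to-first-grade gain converts to 40 ÷ (1/3) = 120 ‘months’, or roughly 10 years of ‘learning’ in a single year. The absurdity lies in the yardstick, not in the students.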
He also finds that the certainty with which the report credits KIPP with improving higher-order reasoning skills is not borne out by the evidence, and that the topic “requires additional empirical work to provide greater clarity.”
The Mathematica researchers also found that the impact of KIPP was unevenly distributed across KIPP schools: most, but not all, had a positive impact. Though a few clues emerged from the data, the factors behind this variation could not be identified. For this reason, Camilli writes that the information provided in the report is not complete enough to guide education policy. Finally, Camilli advises that “Future work evaluating the persistence of KIPP impact will be key to drawing a conclusive judgment of the educational significance of KIPP schooling.”