Monday, June 8, 2015
Students with Disabilities: Report wrongly ties test-based accountability policies to better student outcomes
A recent report from the Center for American Progress (CAP) claims that the quality of education for students with disabilities has improved and that stringent accountability measures are somehow behind those improvements. Yet a review published today explains that, while some student outcomes have improved, the report’s data and analyses are far too weak to provide any causal evidence.
Edward G. Fierros and Katherine Cosner of Villanova University reviewed ESEA Reauthorization: How We Can Build Upon No Child Left Behind’s Progress for Students with Disabilities in a Reauthorized ESEA for the Think Twice think tank review project. The CAP report was authored by Chelsea Straus. The review is published by the National Education Policy Center, housed at the University of Colorado Boulder School of Education.
Dr. Fierros is an Associate Professor and Chairperson in the Department of Education and Counseling at Villanova University. Katherine Cosner is a graduate assistant at Villanova.
As Congress considers the reauthorization of the ESEA, the new CAP report tries to convince legislators that they must continue a test-based system designed to hold students with disabilities to high standards. The argument rests on superficial comparisons of the year 2000 versus the year 2013 on NAEP performance outcomes (average NAEP scale scores), graduation rates, and dropout rates for students with disabilities.
While the report correctly states, “We cannot demonstrate causality” (p. 2), it then proceeds to strongly imply causality, i.e., that NCLB-like policies must be continued in order to sustain gains in educational outcomes for students with disabilities. Professor Fierros explains, “This report tries to have it both ways. Its entire reason for existence is to convince readers of a causal link between these policies and the improved outcomes. But it carefully includes a statement saying that it can’t do just that.”
The reviewers point out that aggregating national data over a 14-year period obscures a number of other possible interpretations and sources of variation. The report contains “not a single reference to a peer reviewed or generally accepted research report” that would have provided a more complete picture of the performance of students with disabilities. Education Week’s Diplomas Count 2015 report, for example, finds a great deal of state variation in the percentage of students with disabilities graduating with a standard diploma. Moreover, states use a variety of methods to determine what constitutes “graduating” for students with disabilities.
Because the report fails to use all available data, consider intervening variables, or apply a more focused research approach, its interpretations and conclusions are unjustified and cannot responsibly be used to advance public policy.