In 2007, NWEA and the Thomas B. Fordham Institute collaborated on The Proficiency Illusion, a study that illustrated the issues created by having each state set its own standards for what constitutes student proficiency for reading and mathematics tests, while holding all states to the same accountability standards. By comparing the cut scores that determine proficiency for each state, the researchers found that there was significant variation in the difficulty of proficiency levels among states. In some states, it is considerably easier for a student to pass their state test than it is for students in other states.
In the four years since the study was published, the educational landscape has changed in many ways. For instance, the No Child Left Behind (NCLB) legislation required states to bring 100% of students to proficiency by the year 2014. As that deadline draws closer, most states fall far short of the goal, creating an incentive for them to lower their standards.
Another major change since the publication of The Proficiency Illusion is the increasingly widespread advocacy by educators and policymakers for shared content standards among states. The National Governors Association, in collaboration with the Council of Chief State School Officers, created the Common Core State Standards for use throughout the country, and several states have already adopted them. Concurrently, it is recognized that new assessments will be needed to measure student learning in relation to these standards. As states affiliate themselves with one of the two newly formed assessment consortia that will be developing new systems to measure student proficiency and progress, discussions are happening across the country about how to maintain local control over education while still ensuring rigorous national expectations.
Yet another major change underway across the country is the push by education officials and policymakers to measure the effectiveness of teachers using student assessment data. Federal grant programs such as Race to the Top have required using student data in teacher evaluations, and many states have worked with teacher unions to implement systems that use student data to measure teacher effectiveness. As schools and states begin to design and implement their evaluation programs, important decisions about performance pay, promotion, tenure, and dismissal are being made based on underlying data that were never intended for such uses. Because those data rest on inconsistent state-defined proficiency levels, these evaluation programs may not be producing the desired effect.
This report updates the original study so that it might inform the next generation of policies governing our nation's schools. In the last decade, the term "proficiency rate" has entered the mainstream lexicon as a measure of school quality, and most people understand, at least intuitively, that a proficiency rate is the number of students who pass the state test divided by the number of students who took it. What may be less understood by the general public, however, is that "proficiency" has no objective meaning; it is largely determined by the choices a state makes in setting its assessment standards, and it is not connected to any external criterion (such as college readiness) that is independent of the test. The purpose of this study is to shine some light on the limitations of using proficiency rates based on inconsistent and arbitrary "passing scores" to make judgments about educational effectiveness.
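To make the arithmetic concrete, here is a minimal illustrative sketch (not from the report; the scores and cut scores are hypothetical) showing how the same set of student scores yields very different proficiency rates depending solely on where a state places its cut score:

```python
# Illustrative sketch: a proficiency rate is simply the share of tested
# students who score at or above the state's chosen cut score.

def proficiency_rate(scores, cut_score):
    """Fraction of students scoring at or above the proficiency cut score."""
    passing = sum(1 for s in scores if s >= cut_score)
    return passing / len(scores)

scores = [210, 225, 198, 240, 215]  # hypothetical scale scores

# Identical student performance, two different state cut scores:
print(proficiency_rate(scores, 200))  # lenient cut score -> 0.8
print(proficiency_rate(scores, 220))  # strict cut score  -> 0.4
```

Nothing about the students changed between the two calls; only the definition of "proficient" did, which is the inconsistency the study documents across states.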
The researchers have also created an online, interactive data gallery where users can explore different states, subjects, and grades to see how proficiency rates change under different circumstances.