Academic assessments are used to measure educational
attainment, assess proficiency, evaluate schools and programs, support
certification and licensure, and inform other important decisions. For test
scores to validly indicate what students know and can do, students must give
good effort on the assessment. Measurement practitioners have long known,
however, that test takers are not always engaged, and that disengagement can
threaten score validity and negatively bias test scores [i, ii]. Disengagement has been seen
both in high-stakes timed tests, where some test takers rapidly guess as time
runs out, and in untimed, low-stakes tests, where low test-taker motivation is
a more likely cause.
Before computer-based tests (CBTs) were introduced, inferences
about test taker engagement had to be made at the test event level, most
frequently by asking a test taker immediately after testing to report their own
level of engagement on the test. With CBTs, new insights are available: item
response time permits an item-by-item assessment of engagement through the
identification of rapid-guessing behavior. Using data from MAP® Growth™, an adaptive
assessment system for K-12 students, this research illustrates the nature of
rapid-guessing behavior, explores how it differs from solution behavior,
provides a model of what happens when test takers disengage, and addresses how
disengagement should be managed during scoring.
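As a concrete illustration of how response times support this kind of item-by-item classification, consider a threshold rule: a response is flagged as a rapid guess when its response time falls below an item-specific threshold. One common family of rules in the response-time literature sets the threshold at a fraction of the item's mean response time (a "normative threshold"). The sketch below is a minimal version of that idea, not the paper's exact procedure; the function names, the 10% fraction, and the 10-second cap are illustrative assumptions.

```python
from statistics import mean

def rapid_guess_thresholds(response_times_by_item, fraction=0.10, cap=10.0):
    """Item-specific thresholds: a fraction of each item's mean response
    time, capped at `cap` seconds so hard items with long mean times do
    not yield implausibly generous thresholds. Both defaults are
    illustrative assumptions, not settings from the paper."""
    return {
        item: min(fraction * mean(times), cap)
        for item, times in response_times_by_item.items()
    }

def flag_rapid_guesses(responses, thresholds):
    """Classify each response as rapid guessing (True) or solution
    behavior (False). `responses` is a list of (item_id, seconds)."""
    return [(item, rt, rt < thresholds[item]) for item, rt in responses]

# Hypothetical usage: item1 averages 45 s, so its threshold is 4.5 s.
thresholds = rapid_guess_thresholds({"item1": [45.0, 52.0, 38.0]})
flags = flag_rapid_guesses([("item1", 2.1), ("item1", 41.0)], thresholds)
# -> [("item1", 2.1, True), ("item1", 41.0, False)]
```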
Research has shown that test takers rarely exhibit rapid-guessing
behavior throughout a test, but rather may move multiple times between solution
behavior and rapid guessing. Rapid guessing is affected by characteristics of
the item, the test taker, and the context in which the item is administered.
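Because rapid guessing comes and goes within a test event, a natural way to summarize it at the test level is the proportion of a test taker's responses classified as solution behavior, in the spirit of the response-time effort (RTE) index. A minimal sketch, assuming the flags produced by the code above:

```python
def response_time_effort(flags):
    """Proportion of responses showing solution behavior (not flagged
    as rapid guesses). Values near 1.0 indicate sustained engagement;
    low values mark test events whose scores may be negatively biased.
    `flags` is the output of flag_rapid_guesses above."""
    if not flags:
        raise ValueError("no responses to evaluate")
    solution = sum(1 for _, _, is_rapid in flags if not is_rapid)
    return solution / len(flags)
```

During scoring, events falling below some RTE cutoff (values around 0.90 are sometimes used in the literature, though any specific cutoff here is an assumption) can be filtered out or reported with a validity caution.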