Several systemic factors make it difficult for developers of online courses to assess students' proficiency accurately. First, a typical test of 10-15 questions is too short to yield an accurate and reliable measure of knowledge. Second, multiple-choice questions invite guessing, which distorts the results. Third, the common practice of using the raw number of correct answers as the measure of proficiency makes it difficult to compare students once the test is updated even slightly.
Researchers at the Higher School of Economics and the University of Leuven addressed these problems by extending the classic Rasch model with additional parameters.
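For context, the classic Rasch model expresses the probability of a correct answer as a logistic function of the difference between a student's ability and an item's difficulty. The short Python sketch below illustrates that baseline model; the function and parameter names are illustrative and not taken from the study.

    import math

    def rasch_probability(ability: float, difficulty: float) -> float:
        # Classic Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty))).
        # Ability and difficulty sit on the same logit scale.
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # A student whose ability exceeds the item's difficulty by 0.5 logits
    # answers correctly about 62% of the time.
    print(rasch_probability(ability=0.5, difficulty=0.0))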
'First, our expanded approach includes the effect of multiple attempts, making it possible to distinguish between students who guess and those who know the answers,' said Dmitry Abbakumov, Head of the HSE Centre for Psychometrics in eLearning. 'Second, because the knowledge metrics obtained with this expanded approach are expressed on a single scale, they can be compared even when the test questions are changed significantly. And finally, we calculate metrics based not only on test results, but also by taking into account the student's experience (their activity when watching videos and their performance in hands-on sessions), providing a more comprehensive understanding of the student's competence.'
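One way to picture the attempt effect described above: each retry makes a multiple-choice item effectively easier, since wrong options can be eliminated by trial, so a correct answer on a later attempt is weaker evidence of knowledge. The sketch below is a hypothetical illustration of that idea, not the researchers' published parameterization.

    import math

    def extended_probability(ability: float, difficulty: float,
                             attempt: int, easing_per_attempt: float = 0.5) -> float:
        # Hypothetical extension: each additional attempt lowers the item's
        # effective difficulty. When ability is later estimated from the
        # responses, success on a late attempt therefore contributes less
        # evidence of knowledge than success on the first try.
        effective_difficulty = difficulty - easing_per_attempt * (attempt - 1)
        return 1.0 / (1.0 + math.exp(-(ability - effective_difficulty)))

    # The same correct answer is far less surprising on a third attempt,
    # so it says less about the student's underlying ability.
    print(extended_probability(0.0, 0.0, attempt=1))  # 0.50
    print(extended_probability(0.0, 0.0, attempt=3))  # ~0.73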
In the future, the approach proposed by the researchers could be used in the assessment engines of educational platforms to measure students' knowledge more accurately, and the resulting metrics could be built into navigation and recommendation features in digital education.