Wednesday, June 5, 2024

Impact of AI assistance on student agency

 

Highlights

  • The use of AI in Edtech to scaffold and personalise learning is on the rise.

  • However, the impact of AI-powered Edtech on student agency is underexplored.

  • This paper investigates AI's impact on student agency via a study in peer feedback.

  • Results indicate a student tendency to rely on, rather than learn from, AI.

  • Self-regulated approaches can help replace AI assistance but are not as effective.

Abstract

AI-powered learning technologies are increasingly being used to automate and scaffold learning activities (e.g., personalised reminders for completing tasks, automated real-time feedback for improving writing, or recommendations for when and what to study). While the prevailing view is that these technologies generally have a positive effect on student learning, their impact on students’ agency and ability to self-regulate their learning is under-explored. Do students learn from the regular, detailed and personalised feedback provided by AI systems, and will they continue to exhibit similar behaviour in the absence of assistance? Or do they instead continue to rely on AI assistance without learning from it? 

To help fill this research gap, this report describes a randomised controlled experiment that explored the impact of AI assistance on student agency in the context of peer feedback. The experiment involved 1,625 students across 10 courses. During an initial four-week period, all students received AI guidance while writing peer reviews: when a comment was deemed insufficiently detailed or too general, the system prompted the reviewer to improve it, using techniques such as rule-based suggestion detection, semantic similarity, and comparison with the reviewer's previous comments. Over the following four weeks, students were randomly assigned to four groups: the control group (AI) continued to receive AI prompts, one group (NR) received no prompts, one group (SR) received self-monitoring checklists in place of AI prompts, and one group (SAI) had access to both AI prompts and self-monitoring checklists.
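The abstract names the detection techniques but not their implementation. The sketch below illustrates, under assumed heuristics, how such a quality check might combine rule-based suggestion detection, a similarity measure, and comparison with a reviewer's previous comments; the cue words, thresholds, and the token-overlap (Jaccard) similarity standing in for a semantic model are all illustrative choices, not the paper's method.

```python
# Hypothetical sketch of flagging low-quality peer-feedback comments.
# The generic phrases, cue words, and thresholds below are illustrative
# assumptions; Jaccard token overlap stands in for semantic similarity.

GENERIC_PHRASES = {"good job", "nice work", "well done", "looks good"}

def _tokens(text: str) -> set:
    """Lowercased word tokens with surrounding punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split() if w.strip(".,!?")}

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity in [0, 1] between two comments."""
    ta, tb = _tokens(a), _tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def contains_suggestion(comment: str) -> bool:
    """Rule-based check: does the comment propose a concrete change?"""
    cues = ("could", "should", "consider", "try", "suggest", "instead")
    lowered = comment.lower()
    return any(cue in lowered for cue in cues)

def needs_prompt(comment: str, previous_comments: list,
                 min_words: int = 10, dup_threshold: float = 0.8) -> bool:
    """Return True if the reviewer should be prompted to improve the comment."""
    lowered = comment.lower().strip()
    if len(comment.split()) < min_words:            # too short to be detailed
        return True
    if any(p in lowered for p in GENERIC_PHRASES):  # generic praise only
        return True
    if not contains_suggestion(comment):            # no actionable suggestion
        return True
    # near-duplicate of one of the reviewer's own earlier comments
    if any(jaccard_similarity(comment, prev) >= dup_threshold
           for prev in previous_comments):
        return True
    return False
```

For example, `needs_prompt("Good job!", [])` returns `True` (short, generic, no suggestion), while a specific, actionable comment passing all checks returns `False`.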

Results of the experiment suggest that students tended to rely on, rather than learn from, AI assistance. When AI assistance was removed, self-regulated strategies partially filled the gap but were less effective than AI assistance. Results also showed that hybrid human-AI approaches complementing AI assistance with self-regulated strategies (SAI) were no more effective than AI assistance on its own.
