Authors: Marcus Kubsch, Adrian Grimm, Knut Neumann, Hendrik Drachsler, Nikol Rummel
Original language: English
Published in: Xiaoming Zhai, Joseph Krajcik (Eds.), Uses of Artificial Intelligence in STEM Education
Pages: 230–249
Publisher: Oxford University Press
ISBN: 9780198882077, 9780191991226
DOI: https://doi.org/10.1093/oso/9780198882077.003.0011
Publication status: Published – October 2024
Providing individualized support greatly improves student learning, but it is challenging for teachers to offer it to all learners. The core difficulty is continuously assessing each student's learning and then providing correspondingly individualized support. AI-based systems have the potential to deliver such support to all learners through automation. While recent work has demonstrated that traditional assessments can be scored automatically with validity and reliability, it remains unclear how to automatically assess students' learning at the granularity needed for individualized support, i.e., on a task-to-task basis as students engage in learning. Drawing on evidence-centered design, we developed a framework for building AI-based assessments into existing, high-quality digital learning environments that provide valid and reliable information about student learning based on the artifacts students produce as they work through tasks. We demonstrate how we applied the framework and discuss questions of validity and bias in the resulting assessments.