Using evidence-centered design to develop an automated system for tracking students' physics learning in a digital learning environment

Contribution to collected edition/anthology › Research › Peer reviewed

Publication data


By: Marcus Kubsch, Adrian Grimm, Knut Neumann, Hendrik Drachsler, Nikol Rummel
Original language: English
Published in: Xiaoming Zhai, Joseph Krajcik (Eds.), Uses of artificial intelligence in STEM education
Pages: 230-249
Publisher: Oxford University Press
ISBN: 9780198882077, 9780191991226
DOI/Link: https://doi.org/10.1093/oso/9780198882077.003.0011 (Open Access)
Publication status: Published – 10.2024

Providing individualized support greatly improves student learning, but it is difficult for teachers to offer such support to every learner. The challenge lies in continuously assessing students' learning and then providing correspondingly individualized support. Here, AI-based systems have the potential to deliver individualized support to all learners through automation. While recent work has demonstrated that traditional assessments can be scored automatically with validity and reliability, it remains unclear how to automatically assess students' learning at the granularity needed for individualized support, i.e., task by task as students engage in learning. Drawing on evidence-centered design, we developed a framework for building AI-based assessments for existing, high-quality digital learning environments that provide valid and reliable information about student learning based on the artifacts students produce as they work on tasks. We demonstrate how we applied the framework and discuss questions of validity and bias in the resulting assessments.
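The chapter itself details the framework; as a loose illustration of the kind of pipeline the abstract describes, the sketch below wires together the three components of evidence-centered design (task model, evidence model, student model) so that each artifact a student produces updates a running mastery estimate. All names here (Task, StudentModel, evidence_rule, process) and the keyword-based scoring rule are assumptions made for illustration only, not the authors' implementation; in practice the evidence rule would be a machine-learned scorer trained on human-coded artifacts.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Task model: what the learner is asked to do (hypothetical)."""
    task_id: str
    target_idea: str  # e.g. "energy transfer"

@dataclass
class StudentModel:
    """Student model: running Beta estimate of mastery per idea (hypothetical)."""
    successes: dict = field(default_factory=dict)
    attempts: dict = field(default_factory=dict)

    def update(self, idea: str, score: int) -> None:
        self.successes[idea] = self.successes.get(idea, 0) + score
        self.attempts[idea] = self.attempts.get(idea, 0) + 1

    def mastery(self, idea: str) -> float:
        # Posterior mean under a Beta(1, 1) prior.
        return (self.successes.get(idea, 0) + 1) / (self.attempts.get(idea, 0) + 2)

def evidence_rule(artifact: str) -> int:
    """Evidence model: map an artifact to an observable score.

    Placeholder keyword check; a real system would use a trained classifier.
    """
    return int("energy" in artifact.lower())

def process(task: Task, artifact: str, model: StudentModel) -> float:
    """Score one artifact and return the updated mastery estimate."""
    score = evidence_rule(artifact)
    model.update(task.target_idea, score)
    return model.mastery(task.target_idea)

if __name__ == "__main__":
    student = StudentModel()
    task = Task("T1", "energy transfer")
    print(process(task, "The ball gains kinetic energy as it falls.", student))
```

The point of the sketch is the task-by-task granularity: every artifact is scored as it is produced and immediately folded into the student model, which is what would allow a digital learning environment to trigger individualized support during learning rather than after a summative test.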