Abstract
Adaptive online courses are designed to automatically customize material for different users, typically based on data captured during the course. Assessing the quality of these adaptive courses, however, can be difficult. Traditional assessment methods for (machine) learning algorithms, such as comparison against a ground truth, are often unavailable due to education’s unique goal of affecting both internal user knowledge, which cannot be directly measured, and external, measurable performance. Traditional metrics for education, such as quiz scores, on the other hand, do not necessarily capture the adaptive course’s ability to present the right material to different users. In this work, we present a mathematical framework for developing scalable, efficiently computable metrics for these courses that instructors can use to gauge the efficacy of the adaptation and of their course content. Our metric framework takes as input a set of quantities describing user activity in the course, and balances definitions of user consistency and overall efficacy as inferred from the quantity distributions. We support the metric definitions by comparing the results of a comprehensive statistical analysis with a sample metric evaluation on a dataset of roughly 5,000 users from an online chess platform. In doing so, we find that our metrics yield important insights about the course that are embedded in the larger statistical analysis, as well as additional insights into student drop-off rates.
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2018 |
| Externally published | Yes |
| Event | 11th International Conference on Educational Data Mining, EDM 2018 - Buffalo, United States. Duration: Jul 15 2018 → Jul 18 2018 |
Conference
| Conference | 11th International Conference on Educational Data Mining, EDM 2018 |
| --- | --- |
| Country/Territory | United States |
| City | Buffalo |
| Period | 7/15/18 → 7/18/18 |
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Information Systems