Abstract
As enrollments and class sizes in postsecondary institutions have increased, instructors have sought automated and lightweight means to identify students who are at risk of performing poorly in a course. This identification must be performed early enough in the term to allow instructors to assist those students before they fall irreparably behind. This study describes a modeling methodology that predicts student final exam scores in the third week of the term by using the clicker data that is automatically collected for instructors when they employ the Peer Instruction pedagogy. The modeling technique uses a support vector machine binary classifier, trained on one term of a course, to predict outcomes in the subsequent term. We applied this modeling technique to five different courses across the computer science curriculum, taught by three different instructors at two different institutions. Our modeling approach offers a combination of strengths not found together in prior work, while maintaining accuracy competitive with that work. These strengths include using a lightweight source of student data, affording early detection of struggling students, and predicting outcomes across terms in a natural setting (different final exams, minor changes to course content), across multiple courses in a curriculum, and across multiple institutions.
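The sketch below illustrates the kind of cross-term setup the abstract describes: a support vector machine binary classifier trained on one term's early clicker data and evaluated on the subsequent term. It is a minimal illustration, not the authors' code; the feature names (participation rate, correctness before and after peer discussion), the synthetic data, and the at-risk labeling rule are all assumptions made for the example.

```python
# Minimal sketch of cross-term at-risk prediction from early clicker data.
# Features, labels, and data are hypothetical stand-ins, not the study's dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_term(n_students):
    """Stand-in for one term's weeks 1-3 clicker features and at-risk labels."""
    participation = rng.uniform(0.3, 1.0, n_students)    # share of clicker questions answered
    correct_before = rng.uniform(0.2, 0.9, n_students)   # correct before peer discussion
    correct_after = np.clip(correct_before + rng.normal(0.1, 0.05, n_students), 0, 1)
    X = np.column_stack([participation, correct_before, correct_after])
    # Toy labeling rule: low participation or low correctness -> at risk (label 1).
    y = ((participation < 0.5) | (correct_after < 0.5)).astype(int)
    return X, y

# Train on one term, then predict outcomes in the subsequent term.
X_fall, y_fall = synthetic_term(120)
X_spring, y_spring = synthetic_term(110)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
model.fit(X_fall, y_fall)
print("cross-term accuracy:", accuracy_score(y_spring, model.predict(X_spring)))
```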
| Original language | English (US) |
| --- | --- |
| Article number | 18 |
| Journal | ACM Transactions on Computing Education |
| Volume | 19 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jan 2019 |
All Science Journal Classification (ASJC) codes
- General Computer Science
- Education
Keywords
- At-risk students
- Clicker data
- Cross-term
- Machine learning
- Multi-institution
- Peer instruction
- Prediction