Faster teaching by POMDP planning

Anna N. Rafferty, Emma Brunskill, Thomas L. Griffiths, Patrick Shafto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

53 Scopus citations


Both human and automated tutors must infer what a student knows and plan future actions to maximize learning. Though substantial research has been done on tracking and modeling student learning, significantly less attention has been paid to planning teaching actions and to how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process (POMDP) planning problem. We consider three models of student learning and present approximate methods for finding optimal teaching actions given the large state and action spaces that arise in teaching. An experimental evaluation of the resulting policies on a simple concept-learning task shows that framing teacher action planning as a POMDP can accelerate learning relative to baseline performance.
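The abstract's central idea, maintaining a belief over the student's hidden knowledge state and choosing teaching actions to maximize expected learning, can be illustrated with a minimal sketch. This is not the paper's implementation: the domain, the memoryless consistent-learner model, and all names below are hypothetical, and the planner shown is a simple one-step (myopic) lookahead rather than the paper's approximate POMDP planning methods.

```python
import itertools

# Hypothetical concept-learning domain: a concept is a subset of items,
# and hypotheses are all candidate concepts the student might hold.
ITEMS = [0, 1, 2, 3]
HYPOTHESES = [frozenset(s) for r in range(len(ITEMS) + 1)
              for s in itertools.combinations(ITEMS, r)]
TARGET = frozenset({0, 2})  # the concept the teacher wants to convey

def consistent(h, example):
    """An example is (item, label); h is consistent if it assigns the same label."""
    item, label = example
    return (item in h) == label

def belief_update(belief, example):
    """Teacher's belief update over student hypotheses, assuming a simple
    memoryless student who discards hypotheses inconsistent with the example."""
    new = {h: p for h, p in belief.items() if consistent(h, example)}
    z = sum(new.values())
    return {h: p / z for h, p in new.items()} if z else belief

def choose_example(belief):
    """Myopic teaching-action selection: pick the labeled example that
    maximizes the student's posterior probability of the target concept."""
    candidates = [(item, item in TARGET) for item in ITEMS]
    return max(candidates,
               key=lambda ex: belief_update(belief, ex).get(TARGET, 0.0))

# Teach until the belief concentrates on the target concept.
belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
while belief.get(TARGET, 0.0) < 0.99:
    example = choose_example(belief)
    belief = belief_update(belief, example)
```

With a deterministic, noise-free learner the belief update reduces to filtering out inconsistent hypotheses; the paper's setting is harder because its student models are probabilistic and the resulting state and action spaces are large, which is why approximate planning is needed.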

Original language: English (US)
Title of host publication: Artificial Intelligence in Education - 15th International Conference, AIED 2011
Number of pages: 8
State: Published - 2011
Externally published: Yes
Event: 15th International Conference on Artificial Intelligence in Education, AIED 2011 - Auckland, New Zealand
Duration: Jun 28, 2011 - Jul 1, 2011

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 6738 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: 15th International Conference on Artificial Intelligence in Education, AIED 2011
Country/Territory: New Zealand

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science


