Evaluating computational models of explanation using human judgments

Michael Pacer, Joseph Williams, Xi Chen, Tania Lombrozo, Thomas L. Griffiths

Research output: Contribution to conference › Paper › peer-review

24 Scopus citations

Abstract

We evaluate four computational models of explanation in Bayesian networks by comparing model predictions to human judgments. In two experiments, we present human participants with causal structures for which the models make divergent predictions and either solicit the best explanation for an observed event (Experiment 1) or have participants rate provided explanations for an observed event (Experiment 2). Across two versions of two causal structures and across both experiments, we find that the Causal Explanation Tree and Most Relevant Explanation models provide better fits to human data than either Most Probable Explanation or Explanation Tree models. We identify strengths and shortcomings of these models and what they can reveal about human explanation. We conclude by suggesting the value of pursuing computational and psychological investigations of explanation in parallel.
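The models compared in the abstract score candidate explanations in different ways, which is why they can disagree on the same causal structure. As a rough illustration (not the paper's code, stimuli, or probabilities; the network, variable names, and numbers below are made up), the following Python sketch computes two of the better-known criteria by brute-force enumeration on a toy two-cause, one-effect Bayesian network: Most Probable Explanation, which picks the full assignment to the non-evidence variables with the highest posterior given the evidence, and Most Relevant Explanation, which picks the (possibly partial) assignment maximizing the generalized Bayes factor P(e | x) / P(e | not-x).

import itertools

# Minimal sketch (illustrative only, not from the paper):
# a toy Bayesian network with two binary causes A, B and one effect E.
# Hypothetical conditional probability tables:
P_A = {1: 0.3, 0: 0.7}                       # prior on cause A
P_B = {1: 0.1, 0: 0.9}                       # prior on cause B
P_E1 = {(0, 0): 0.05, (1, 0): 0.8,
        (0, 1): 0.6,  (1, 1): 0.9}           # P(E = 1 | A, B)

def joint(a, b, e):
    """P(A=a, B=b, E=e) by the chain rule for this structure."""
    pe = P_E1[(a, b)] if e == 1 else 1.0 - P_E1[(a, b)]
    return P_A[a] * P_B[b] * pe

def prob_e_given(assign, e=1):
    """P(E=e | assign), where assign fixes a subset of {'A', 'B'}."""
    match = [(a, b) for a, b in itertools.product((0, 1), repeat=2)
             if all({'A': a, 'B': b}[v] == val for v, val in assign.items())]
    num = sum(joint(a, b, e) for a, b in match)
    den = sum(joint(a, b, 0) + joint(a, b, 1) for a, b in match)
    return num / den

def mpe(e=1):
    """Most Probable Explanation: the full assignment to (A, B) with the
    highest posterior probability given E=e."""
    return max(itertools.product((0, 1), repeat=2),
               key=lambda ab: joint(ab[0], ab[1], e))

def mre(e=1):
    """Most Relevant Explanation: the (possibly partial) assignment maximizing
    the generalized Bayes factor GBF(x; e) = P(e | x) / P(e | not-x)."""
    candidates = [dict(zip(vs, vals))
                  for vs in (('A',), ('B',), ('A', 'B'))
                  for vals in itertools.product((0, 1), repeat=len(vs))]
    def gbf(x):
        # complement of x: assignments to x's variables that differ from x
        comp = [(a, b) for a, b in itertools.product((0, 1), repeat=2)
                if not all({'A': a, 'B': b}[v] == val for v, val in x.items())]
        num = sum(joint(a, b, e) for a, b in comp)
        den = sum(joint(a, b, 0) + joint(a, b, 1) for a, b in comp)
        return prob_e_given(x, e) / (num / den)
    return max(candidates, key=gbf)

print("MPE for E=1:", mpe())   # full assignment to both causes
print("MRE for E=1:", mre())   # may single out just one cause

With these made-up numbers the two criteria already diverge: MPE returns the full assignment A=1, B=0, whereas MRE returns just A=1. The paper's experiments ask which kind of criterion better matches the explanations people prefer.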

Original language: English (US)
Pages: 498-507
Number of pages: 10
State: Published - 2013
Externally published: Yes
Event: 29th Conference on Uncertainty in Artificial Intelligence, UAI 2013 - Bellevue, WA, United States
Duration: Jul 11 2013 - Jul 15 2013

Other

Other: 29th Conference on Uncertainty in Artificial Intelligence, UAI 2013
Country/Territory: United States
City: Bellevue, WA
Period: 7/11/13 - 7/15/13

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
