Abstract
Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge: identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify three key aspects of abstract prior knowledge: the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships. We show how these provide the constraints that people need to induce useful causal models from sparse data.
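The framework the abstract describes casts causal induction as Bayesian comparison of candidate causal structures, with abstract prior knowledge fixing which structures are plausible and what functional form their relationships take. The sketch below illustrates that idea in miniature for a single cause-effect pair, comparing a structure with a C → E edge against one without. It assumes a noisy-OR functional form and a uniform grid prior over parameters; the data, grid resolution, and all function names are illustrative choices, not details taken from the paper.

```python
# A minimal sketch of Bayesian causal structure comparison for one
# candidate cause C and one effect E, assuming a noisy-OR functional
# form. Parameter values and data are illustrative, not from the paper.
import numpy as np

def noisy_or(background, strength, cause_present):
    """P(effect | cause) under a noisy-OR parameterization:
    the effect occurs via the background cause or via C, independently."""
    return 1 - (1 - background) * (1 - strength) ** cause_present

def log_likelihood(data, background, strength):
    """Log-probability of (cause, effect) observations given parameters."""
    ll = 0.0
    for cause, effect in data:
        p = noisy_or(background, strength, cause)
        ll += np.log(p if effect else 1 - p)
    return ll

def log_marginal(data, causal, grid=np.linspace(0.01, 0.99, 50)):
    """Marginal likelihood of a structure, integrating out parameters
    on a uniform grid. causal=False fixes the C -> E strength at 0,
    i.e., the structure with no causal edge."""
    if causal:
        lls = [log_likelihood(data, b, s) for b in grid for s in grid]
    else:
        lls = [log_likelihood(data, b, 0.0) for b in grid]
    lls = np.array(lls)
    # log-mean-exp over the grid approximates the integral stably
    return lls.max() + np.log(np.mean(np.exp(lls - lls.max())))

# Hypothetical contingency data: (cause present?, effect present?)
data = [(1, 1)] * 6 + [(1, 0)] * 2 + [(0, 1)] * 1 + [(0, 0)] * 7

# Log posterior odds for the causal structure, with a uniform
# prior over the two structures
support = log_marginal(data, causal=True) - log_marginal(data, causal=False)
print(f"log posterior odds for C -> E: {support:.2f}")
```

Positive log odds favor the structure containing the causal edge. The same machinery extends to richer ontologies, plausibility priors, and functional forms, which is the generalization the paper develops.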
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 661-716 |
| Number of pages | 56 |
| Journal | Psychological Review |
| Volume | 116 |
| Issue number | 4 |
| DOIs | |
| State | Published - Oct 2009 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- General Psychology
Keywords
- Bayesian modeling
- causal induction
- intuitive theories
- rational analysis