Abstract
Much of human learning and inference can be framed within the computational problem of relational generalization. In this project, we propose a Bayesian model that generalizes relational knowledge to novel environments by analogically weighting predictions from previously encountered relational structures. First, we show that this learner outperforms a naive, theory-based learner on relational data derived from random- and Wikipedia-based systems when experience with the environment is limited. Next, we show how our formalization of analogical similarity translates to the selection and weighting of analogies. Finally, we combine the analogy- and theory-based learners in a single nonparametric Bayesian model, and show that, as experience with a novel system grows, optimal relational generalization transitions from relying on analogies to building a theory of the system itself. Beyond predicting unobserved interactions better than either baseline, this formalization gives a computational-level perspective on the formation and abstraction of analogies themselves.
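The abstract describes the core mechanism only at a high level, so the following is a minimal, illustrative Python sketch of the general idea: weight each previously learned relational structure by how well it agrees with the few observations made in a novel system, blend their predictions, and shift weight toward a simple theory-based estimate as experience grows. The function names, the softmax temperature, the Beta-Bernoulli "theory" learner, and the mixing rule are assumptions made here for illustration, not the paper's actual model.

```python
import numpy as np

def analogy_similarity(obs, mask, source):
    """Agreement between the observed entries of the novel system and a source structure.
    obs, source: n x n binary relation matrices; mask: boolean matrix of observed entries."""
    if mask.sum() == 0:
        return 0.5  # no evidence yet: all sources equally plausible
    return (obs[mask] == source[mask]).mean()

def predict(obs, mask, sources, temperature=0.1, prior_strength=5.0):
    """Blend analogy-weighted source predictions with a naive theory-based learner.
    sources: list of previously learned n x n relation matrices (assumed aligned
    to the novel system's entities for simplicity). Returns predicted link probabilities."""
    # Soft selection of analogies: sources that better match the observations get more weight.
    sims = np.array([analogy_similarity(obs, mask, s) for s in sources])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    analogy_pred = np.tensordot(weights, np.stack(sources), axes=1)

    # Naive theory-based learner (placeholder): Beta-Bernoulli estimate of link density.
    n_obs = mask.sum()
    density = (obs[mask].sum() + 1.0) / (n_obs + 2.0)
    theory_pred = np.full_like(analogy_pred, density)

    # With little experience, lean on analogies; with more, trust the theory built from data.
    alpha = n_obs / (n_obs + prior_strength)
    return alpha * theory_pred + (1 - alpha) * analogy_pred
```

A usage sketch: with `sources` holding relation matrices from earlier environments and `obs`/`mask` encoding a handful of observed interactions in the new one, `predict` returns probabilities for the unobserved pairs, dominated by the closest analogies early on and by the data-driven estimate later.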
Original language | English (US) |
---|---|
Pages | 2605-2611 |
Number of pages | 7 |
State | Published - 2020 |
Event | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 - Virtual, Online. Duration: Jul 29, 2020 → Aug 1, 2020 |
Conference
Conference | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 |
---|---|
City | Virtual, Online |
Period | 7/29/20 → 8/1/20 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction
- Cognitive Neuroscience
Keywords
- Bayesian models
- analogy
- generalization
- inference
- nonparametric statistics