Abstract
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
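The convergence result for sampling learners can be illustrated with a small simulation. The setup below is a hypothetical toy example (the candidate languages, prior, and amount of data are invented for illustration, not taken from the paper): each generation's learner observes utterances produced by the previous speaker, computes a Bayesian posterior over a finite set of languages, and samples its own language from that posterior. Tracking which language each generation adopts, the empirical distribution approaches the prior, as the paper's Gibbs-sampling analysis predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy example: 3 candidate "languages", each a distribution
# over 2 possible utterances (values chosen for illustration only).
languages = np.array([[0.9, 0.1],
                      [0.5, 0.5],
                      [0.1, 0.9]])
prior = np.array([0.6, 0.3, 0.1])  # learners' inductive bias
n_utterances = 3                    # data passed between generations
n_generations = 100_000

counts = np.zeros(3)
h = 0  # language of the first speaker
for _ in range(n_generations):
    # Current speaker produces data from its language.
    data = rng.choice(2, size=n_utterances, p=languages[h])
    # Learner combines likelihood and prior into a posterior.
    likelihood = np.prod(languages[:, data], axis=1)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    # Sampling learner: draw the next language from the posterior
    # (one step of the Gibbs-sampling chain over languages).
    h = rng.choice(3, p=posterior)
    counts[h] += 1

print(counts / counts.sum())  # approaches the prior [0.6, 0.3, 0.1]
```

Note that the stationary distribution does not depend on the likelihoods or on `n_utterances`; those only affect how quickly the chain mixes, which is why the paper concludes the outcome of iterated learning with sampling learners is determined entirely by the prior.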
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 441-480 |
| Number of pages | 40 |
| Journal | Cognitive Science |
| Volume | 31 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2007 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Artificial Intelligence
- Cognitive Neuroscience
Keywords
- Bayesian models
- Cultural transmission
- Iterated learning
- Language evolution