Adding population structure to models of language evolution by iterated learning

Andrew Whalen, Thomas L. Griffiths

Research output: Contribution to journal › Article › peer-review



Previous work on iterated learning, a standard language-learning paradigm in which each learner in a sequence learns a language from the previous learner, has found that if learners use a form of Bayesian inference, the distribution of languages in a population comes to reflect the prior distribution assumed by the learners (Griffiths & Kalish, 2007). We extend these results to allow for more complex population structures, and demonstrate that for learners on undirected graphs the distribution of languages also reflects the prior distribution. We then use techniques borrowed from statistical physics to obtain deeper insight into language evolution, finding that although population structure does not influence the probability that an individual speaks a given language, it does influence how likely neighbors are to speak the same language. These analyses lift a restrictive assumption of iterated learning, and suggest that experimental and mathematical findings obtained with iterated learning may apply to a wider range of settings.
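The convergence result for a single chain of Bayesian learners can be illustrated with a minimal simulation sketch. This is not the paper's model: the two-language setup, the noise rate, and all parameter values below are illustrative assumptions. Each learner samples a language from the posterior given noisy utterances produced by the previous learner ("sampler" learners, in the sense of Griffiths & Kalish, 2007), and the long-run distribution of languages along the chain approaches the prior.

```python
import random

# Illustrative sketch of iterated learning with Bayesian "sampler" learners.
# Two languages (0 and 1); all parameter values are assumptions, not the paper's.

PRIOR = {0: 0.7, 1: 0.3}  # learners' shared prior over the two languages
NOISE = 0.1               # chance an utterance mismatches the speaker's language
N_UTTERANCES = 5          # data passed from each learner to the next

def produce(lang):
    """Generate noisy utterances from a speaker of `lang`."""
    return [lang if random.random() > NOISE else 1 - lang
            for _ in range(N_UTTERANCES)]

def learn(data):
    """Sample a language from the posterior given the observed data."""
    post = {}
    for lang, p in PRIOR.items():
        lik = 1.0
        for d in data:
            lik *= (1 - NOISE) if d == lang else NOISE
        post[lang] = p * lik
    z = post[0] + post[1]
    return 0 if random.random() < post[0] / z else 1

def iterated_chain(generations=100_000, burn_in=1_000):
    """Run one chain of learners; return the fraction speaking language 0."""
    lang, hits = 0, 0
    for t in range(generations):
        lang = learn(produce(lang))
        if t >= burn_in and lang == 0:
            hits += 1
    return hits / (generations - burn_in)

if __name__ == "__main__":
    random.seed(0)
    # The chain's stationary distribution matches the prior,
    # so this fraction should be close to PRIOR[0] = 0.7.
    print(round(iterated_chain(), 3))
```

The paper's contribution is to show that an analogous result holds when the single chain is replaced by a population of learners on an undirected graph, with neighbor correlations analyzed via voter-model techniques from statistical physics.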

Original language: English (US)
Pages (from-to): 1-6
Number of pages: 6
Journal: Journal of Mathematical Psychology
State: Published - Feb 1 2017
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Psychology
  • Applied Mathematics


Keywords

  • Bayesian models
  • Cultural evolution
  • Iterated learning
  • Language evolution
  • Voter models


