TY - JOUR
T1 - The evolution of frequency distributions: Relating regularization to inductive biases through iterated learning
AU - Reali, Florencia
AU - Griffiths, Thomas L.
N1 - Funding Information:
We thank Athena Vouloumanos for providing the materials used in the experiments, and Carla Hudson Kam and Fei Xu for suggestions. We also thank Aaron Beppu, Matt Cammann, Jason Martin, Vlad Shut and Linsey Smith for assistance in running the experiments. This work was supported by grants BCS-0631518 and BCS-0704034 from the National Science Foundation.
PY - 2009/6
Y1 - 2009/6
N2 - The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this paper we explore how regular linguistic structures can emerge from language evolution by iterated learning, in which one person's linguistic output is used to generate the linguistic input provided to the next person. We use a model of iterated learning with Bayesian agents to show that this process can result in regularization when learners have the appropriate inductive biases. We then present three experiments demonstrating that simulating the process of language evolution in the laboratory can reveal biases towards regularization that might not otherwise be obvious, allowing weak biases to have strong effects. The results of these experiments suggest that people tend to regularize inconsistent word-meaning mappings, and that even a weak bias towards regularization can allow regular languages to be produced via language evolution by iterated learning.
KW - Bayesian models
KW - Frequency distributions
KW - Iterated learning
KW - Language acquisition
KW - Word learning
UR - http://www.scopus.com/inward/record.url?scp=67349178599&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67349178599&partnerID=8YFLogxK
DO - 10.1016/j.cognition.2009.02.012
M3 - Article
C2 - 19327759
AN - SCOPUS:67349178599
SN - 0010-0277
VL - 111
SP - 317
EP - 328
JO - Cognition
JF - Cognition
IS - 3
ER -