When regularization gets it wrong: Children over-simplify language input only in production

Research output: Contribution to journal › Article › peer-review

22 Scopus citations


Children tend to regularize their productions when exposed to artificial languages, an advantageous response to unpredictable variation. But generalizations in natural languages are typically conditioned by factors that children ultimately learn. In two experiments, adult and six-year-old learners witnessed two novel classifiers, probabilistically conditioned by semantics. Whereas adults displayed high accuracy in their productions, applying the semantic criteria to familiar and novel items, children were oblivious to the semantic conditioning. Instead, children regularized their productions, over-relying on only one classifier. However, in a two-alternative forced-choice task, children's performance revealed greater respect for the system's complexity: they selected both classifiers equally, without bias toward one or the other, and displayed better accuracy on familiar items. Given that natural languages are conditioned by multiple factors that children successfully learn, we suggest that their tendency to simplify in production stems from retrieval difficulty when a complex system has not yet been fully learned.

Original language: English (US)
Pages (from-to): 1054-1072
Number of pages: 19
Journal: Journal of Child Language
Issue number: 5
State: Published - Sep 1 2018

All Science Journal Classification (ASJC) codes

  • Experimental and Cognitive Psychology
  • Developmental and Educational Psychology
  • Language and Linguistics
  • Linguistics and Language
  • General Psychology


Keywords

  • generalization
  • language acquisition
  • probability boosting

