Abstract
Language research has come to rely heavily on large-scale, web-based datasets. These datasets can present significant methodological challenges, requiring researchers to make a number of decisions about how the data are collected, represented, and analyzed. These decisions often concern long-standing challenges in corpus-based language research, including determining what counts as a word, deciding which words should be analyzed, and matching sets of words across languages. We illustrate these challenges by revisiting “Word lengths are optimized for efficient communication” (Piantadosi, Tily, & Gibson, 2011), which found that word lengths in 11 languages are more strongly correlated with their average predictability (or average information content) than with their frequency. Using what we argue to be best practices for large-scale corpus analyses, we find significantly attenuated support for this result and demonstrate that a stronger relationship obtains between word frequency and length for a majority of the languages in the sample. We consider the implications of the results for language research more broadly and provide several recommendations to researchers regarding best practices.
| Original language | English (US) |
|---|---|
| Article number | e12983 |
| Journal | Cognitive Science |
| Volume | 45 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2021 |
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Cognitive Neuroscience
- Artificial Intelligence
Keywords
- Compression
- Corpus linguistics
- Information theory
- Linguistic universals
- Noisy channel communication
- Uniform information density
- n-Gram models