The Challenges of Large-Scale, Web-Based Language Datasets: Word Length and Predictability Revisited

Stephan C. Meylan, Thomas L. Griffiths

Research output: Contribution to journal › Article › peer-review

10 Scopus citations


Language research has come to rely heavily on large-scale, web-based datasets. These datasets can present significant methodological challenges, requiring researchers to make a number of decisions about how they are collected, represented, and analyzed. These decisions often concern long-standing challenges in corpus-based language research, including determining what counts as a word, deciding which words should be analyzed, and matching sets of words across languages. We illustrate these challenges by revisiting “Word lengths are optimized for efficient communication” (Piantadosi, Tily, & Gibson, 2011), which found that word lengths in 11 languages are more strongly correlated with their average predictability (or average information content) than their frequency. Using what we argue to be best practices for large-scale corpus analyses, we find significantly attenuated support for this result and demonstrate that a stronger relationship obtains between word frequency and length for a majority of the languages in the sample. We consider the implications of the results for language research more broadly and provide several recommendations to researchers regarding best practices.
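The two predictors compared in the study can be made concrete with a toy example. The sketch below (illustrative only; not the paper's code, and using an unsmoothed bigram model on a tiny made-up corpus) computes each word's unigram surprisal, −log₂ p(w), and its average in-context information content, the mean of −log₂ p(w | previous word) over the contexts in which it occurs:

```python
# Toy illustration of the two predictors of word length discussed above:
# unigram surprisal (frequency) vs. average information content under a
# bigram model. All names and the corpus are illustrative assumptions.
import math
from collections import Counter, defaultdict

tokens = "the cat sat on the mat the cat ran to the mat".split()

# Unigram surprisal: -log2 p(w), i.e., (negative log) frequency.
unigram = Counter(tokens)
total = len(tokens)
surprisal = {w: -math.log2(c / total) for w, c in unigram.items()}

# Average information content: mean of -log2 p(w | prev) over the
# contexts in which w occurs (bigram model, no smoothing, for brevity).
bigram = defaultdict(Counter)
for prev, w in zip(tokens, tokens[1:]):
    bigram[prev][w] += 1

info = defaultdict(list)
for prev, w in zip(tokens, tokens[1:]):
    p = bigram[prev][w] / sum(bigram[prev].values())
    info[w].append(-math.log2(p))
avg_info = {w: sum(v) / len(v) for w, v in info.items()}

for w in sorted(avg_info):
    print(f"{w:>4}  len={len(w)}  surprisal={surprisal[w]:.2f}  "
          f"avg info={avg_info[w]:.2f}")
```

In a full analysis, each predictor would be correlated with word length across the lexicon; the paper's argument is that decisions upstream of this step (tokenization, word-set selection, cross-language matching) can change which correlation comes out stronger.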

Original language: English (US)
Article number: e12983
Journal: Cognitive Science
Issue number: 6
State: Published - Jun 2021

All Science Journal Classification (ASJC) codes

  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience
  • Artificial Intelligence

Keywords

  • Compression
  • Corpus linguistics
  • Information theory
  • Linguistic universals
  • Noisy channel communication
  • Uniform information density
  • n-Gram models
