Statistical mechanics of letters in words

Greg J. Stephens, William Bialek

Research output: Contribution to journal › Article › peer-review

40 Scopus citations


We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ∼92% of the multi-information in four-letter words and even "discovering" words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy, we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ∼68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon.
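The pairwise maximum-entropy construction described in the abstract can be sketched in code. The following is an illustrative toy, not the authors' implementation: the four-letter alphabet, length-three "words", the tiny corpus, and the learning-rate/iteration settings are all assumptions chosen so that the full state space can be enumerated exactly; the paper itself works with four-letter English words over the 26-letter alphabet. The model assigns each word an energy E(w) = −Σᵢ hᵢ(wᵢ) − Σᵢ<ⱼ Jᵢⱼ(wᵢ, wⱼ) and fits the fields h and couplings J by gradient ascent so that the model's single-letter and pairwise marginals match those of the data.

```python
# Toy pairwise maximum-entropy ("Ising-like") model over fixed-length words.
# Illustrative sketch only: alphabet, corpus, and hyperparameters are assumptions.
import itertools
import math
from collections import Counter

ALPHABET = "abcd"          # assumed toy alphabet (paper: 26 letters)
L = 3                      # assumed word length (paper: 4)
DATA = ["abc", "abd", "bcd", "abc", "acd", "abd", "abc"]  # assumed toy corpus

# Enumerate the entire state space (4^3 = 64 words), so all sums are exact.
STATES = ["".join(s) for s in itertools.product(ALPHABET, repeat=L)]

def energy(w, h, J):
    """E(w) = -sum_i h_i(w_i) - sum_{i<j} J_ij(w_i, w_j)."""
    e = -sum(h[i][w[i]] for i in range(L))
    e -= sum(J[i][j][w[i] + w[j]] for i in range(L) for j in range(i + 1, L))
    return e

def model_probs(h, J):
    """Boltzmann distribution P(w) ∝ exp(-E(w)), normalized over all states."""
    weights = [math.exp(-energy(w, h, J)) for w in STATES]
    Z = sum(weights)
    return {w: x / Z for w, x in zip(STATES, weights)}

def marginals(p):
    """Single-site and pairwise marginals of a distribution p over STATES."""
    m1 = [{a: 0.0 for a in ALPHABET} for _ in range(L)]
    m2 = [[{a + b: 0.0 for a in ALPHABET for b in ALPHABET}
           for _ in range(L)] for _ in range(L)]
    for w, pw in p.items():
        for i in range(L):
            m1[i][w[i]] += pw
            for j in range(i + 1, L):
                m2[i][j][w[i] + w[j]] += pw
    return m1, m2

# Empirical distribution and its marginals (the constraints to match).
counts = Counter(DATA)
p_emp = {w: counts.get(w, 0) / len(DATA) for w in STATES}
e1, e2 = marginals(p_emp)

# Initialize all fields and couplings to zero (uniform distribution).
h = [{a: 0.0 for a in ALPHABET} for _ in range(L)]
J = [[{a + b: 0.0 for a in ALPHABET for b in ALPHABET}
      for _ in range(L)] for _ in range(L)]

# Gradient ascent on the log-likelihood: the gradient with respect to each
# parameter is (empirical marginal - model marginal), zero at the optimum.
lr = 0.5
for _ in range(1000):
    m1, m2 = marginals(model_probs(h, J))
    for i in range(L):
        for a in ALPHABET:
            h[i][a] += lr * (e1[i][a] - m1[i][a])
        for j in range(i + 1, L):
            for ab in J[i][j]:
                J[i][j][ab] += lr * (e2[i][j][ab] - m2[i][j][ab])

p = model_probs(h, J)
# High-probability states of the fitted model play the role of an effective
# vocabulary: some may be "words" never seen in the toy corpus.
top_words = sorted(p, key=p.get, reverse=True)[:5]
```

Because the fitted model matches only pairwise statistics, it can assign nonzero probability to letter combinations absent from the corpus, which is the sense in which such models "discover" unseen words; the local minima of the resulting energy landscape are the candidate effective vocabulary.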

Original language: English (US)
Article number: 066119
Journal: Physical Review E - Statistical, Nonlinear, and Soft Matter Physics
Issue number: 6
State: Published - Jun 25 2010

All Science Journal Classification (ASJC) codes

  • Condensed Matter Physics
  • Statistical and Nonlinear Physics
  • Statistics and Probability

