We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ∼92% of the multi-information in four-letter words and even "discovering" words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy, we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ∼68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon.
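The construction described above can be sketched in code: fit pairwise couplings so that the model's pairwise letter statistics match those of a corpus, then read off the local minima of the resulting energy landscape. The tiny alphabet, toy word counts, and simple gradient-ascent fit below are illustrative assumptions for a minimal sketch, not the paper's actual data or fitting procedure.

```python
# Minimal sketch of a pairwise maximum entropy model on short words.
# Assumptions: a toy 4-letter alphabet, 3-letter words, and a handful of
# made-up word counts standing in for a corpus.
import itertools
import numpy as np

ALPHABET = "aeht"
L = 3
A = len(ALPHABET)
PAIRS = [(0, 1), (0, 2), (1, 2)]            # all position pairs

counts = {"the": 10, "hat": 5, "eat": 4, "tea": 3, "ate": 2}

states = ["".join(s) for s in itertools.product(ALPHABET, repeat=L)]
idx = {c: i for i, c in enumerate(ALPHABET)}
p_data = np.array([counts.get(w, 0) for w in states], float)
p_data /= p_data.sum()

def pair_marginals(p):
    """Pairwise letter marginals of a distribution p over all words."""
    m = {pr: np.zeros((A, A)) for pr in PAIRS}
    for w, pw in zip(states, p):
        for (i, j) in PAIRS:
            m[(i, j)][idx[w[i]], idx[w[j]]] += pw
    return m

target = pair_marginals(p_data)

# Couplings J define the energy E(w) = -sum_pairs J[(i,j)][w_i, w_j],
# and the model is the Boltzmann distribution p(w) ~ exp(-E(w)).
J = {pr: np.zeros((A, A)) for pr in PAIRS}

def model_probs():
    E = np.array([-sum(J[(i, j)][idx[w[i]], idx[w[j]]] for (i, j) in PAIRS)
                  for w in states])
    p = np.exp(-E)
    return p / p.sum()

# Gradient ascent on the (concave) log-likelihood:
# dL/dJ = (data pairwise marginal) - (model pairwise marginal).
for _ in range(2000):
    m = pair_marginals(model_probs())
    for pr in PAIRS:
        J[pr] += target[pr] - m[pr]

p_model = model_probs()
fit = pair_marginals(p_model)
err = max(np.abs(fit[pr] - target[pr]).max() for pr in PAIRS)
print("max pairwise-marginal mismatch:", err)

# Local minima of the energy landscape: words more probable than every
# single-letter substitution -- the "effective vocabulary" of the model.
prob = dict(zip(states, p_model))

def neighbors(w):
    for i in range(L):
        for c in ALPHABET:
            if c != w[i]:
                yield w[:i] + c + w[i + 1:]

minima = [w for w in states
          if all(prob[w] > prob[n] for n in neighbors(w))]
print("local minima:", minima)
```

On this toy corpus the fitted model reproduces the pairwise marginals closely, and the local-minimum scan picks out a short list of "attractor" words, mirroring the idea that basins of the landscape define a vocabulary much smaller than the full space of letter strings.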
- Original language: English (US)
- Journal: Physical Review E - Statistical, Nonlinear, and Soft Matter Physics
- State: Published - Jun 25 2010
All Science Journal Classification (ASJC) codes
- Statistical and Nonlinear Physics
- Statistics and Probability
- Condensed Matter Physics