Producing power-law distributions and damping word frequencies with two-stage language models

Sharon Goldwater, Thomas L. Griffiths, Mark Johnson

Research output: Contribution to journal › Article › peer-review

Abstract

Standard statistical models of language fail to capture one of the most striking properties of natural languages: the power-law distribution in the frequencies of word tokens. We present a framework for developing statistical models that can generically produce power laws, breaking generative models into two stages. The first stage, the generator, can be any standard probabilistic model, while the second stage, the adaptor, transforms the word frequencies of this model to provide a closer match to natural language. We show that two commonly used Bayesian models, the Dirichlet-multinomial model and the Dirichlet process, can be viewed as special cases of our framework. We discuss two stochastic processes, the Chinese restaurant process and its two-parameter generalization based on the Pitman-Yor process, that can be used as adaptors in our framework to produce power-law distributions over word frequencies. We show that these adaptors justify common estimation procedures based on logarithmic or inverse-power transformations of empirical frequencies. In addition, taking the Pitman-Yor Chinese restaurant process as an adaptor justifies the appearance of type frequencies in formal analyses of natural language and improves the performance of a model for unsupervised learning of morphology.
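
To give intuition for the adaptor stage described in the abstract, here is a minimal, self-contained Python sketch (not from the paper) of the Pitman-Yor Chinese restaurant process used as an adaptor over an arbitrary generator. The function name pitman_yor_crp and the uniform generator in the example are illustrative assumptions; d (discount) and alpha (concentration) follow the standard parameterization of the process.

import random
from collections import Counter

def pitman_yor_crp(n_tokens, d, alpha, generator, rng=None):
    """Pitman-Yor Chinese restaurant process used as an adaptor.

    Each table is labeled with an independent draw from `generator`
    (the first-stage model); the emitted corpus is the sequence of
    table labels. Requires 0 <= d < 1 and alpha > -d.
    """
    rng = rng or random.Random(0)
    counts = []    # counts[k]: number of customers seated at table k
    labels = []    # labels[k]: word labeling table k
    corpus = []
    total = 0      # customers seated so far
    for _ in range(n_tokens):
        # A new table has weight alpha + d * K; existing table k has
        # weight counts[k] - d, all normalized by total + alpha.
        r = rng.random() * (total + alpha)
        k = None
        if r >= alpha + d * len(counts):
            r -= alpha + d * len(counts)
            for i, c in enumerate(counts):
                r -= c - d
                if r < 0:
                    k = i
                    break
        if k is None:
            # Open a new table and label it with a fresh generator draw.
            counts.append(1)
            labels.append(generator(rng))
            k = len(counts) - 1
        else:
            counts[k] += 1
        total += 1
        corpus.append(labels[k])
    return corpus

# Example: a uniform generator over 10,000 word types. The adaptor's
# rich-get-richer seating dynamics damp the uniform frequencies into a
# heavy-tailed, power-law-like distribution over tokens.
corpus = pitman_yor_crp(50000, d=0.9, alpha=1.0,
                        generator=lambda rng: "w%d" % rng.randrange(10000))
print(Counter(corpus).most_common(5))

Setting d = 0 recovers the ordinary Chinese restaurant process, corresponding to the Dirichlet process adaptor discussed in the abstract.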

Original language: English (US)
Pages (from-to): 2335-2382
Number of pages: 48
Journal: Journal of Machine Learning Research
Volume: 12
State: Published - Jul 2011
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Keywords

  • Language model
  • Nonparametric Bayes
  • Pitman-Yor process
  • Unsupervised
