Performance guarantees for regularized maximum entropy density estimation

Miroslav Dudík, Steven J. Phillips, Robert E. Schapire

Research output: Contribution to journal › Conference article › peer-review

Abstract

We consider the problem of estimating an unknown probability distribution from samples using the principle of maximum entropy (maxent). To alleviate overfitting with a very large number of features, we propose applying the maxent principle with relaxed constraints on the expectations of the features. By convex duality, this turns out to be equivalent to finding the Gibbs distribution minimizing a regularized version of the empirical log loss. We prove non-asymptotic bounds showing that, with respect to the true underlying distribution, this relaxed version of maxent produces density estimates that are almost as good as the best possible. These bounds are in terms of the deviation of the feature empirical averages relative to their true expectations, a number that can be bounded using standard uniform-convergence techniques. In particular, this leads to bounds that drop quickly with the number of samples, and that depend very moderately on the number or complexity of the features. We also derive and prove convergence for both sequential-update and parallel-update algorithms. Finally, we briefly describe experiments on data relevant to the modeling of species geographical distributions.
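By the duality noted in the abstract, relaxing the constraints |E_p̃[f_j] − E_q[f_j]| ≤ β_j on the feature expectations is equivalent to minimizing the l1-regularized empirical log loss −(1/m) Σ_i log q_λ(x_i) + Σ_j β_j |λ_j| over Gibbs distributions q_λ(x) ∝ exp(λ · f(x)). The sketch below illustrates that objective with a plain proximal-gradient loop (soft-thresholding for the l1 term). It is a minimal illustration under assumed inputs, not the paper's sequential-update or parallel-update algorithm; the function name, the uniform weight beta, the learning rate, and the toy domain are our own choices.

import numpy as np


def regularized_maxent(features, sample_idx, beta=0.05, lr=0.5, iters=1000):
    """l1-regularized maxent over a finite domain via proximal gradient.

    features: (n_points, n_features) array of feature values on the domain.
    sample_idx: indices of the observed sample points within the domain.
    beta: uniform l1 weight (the paper allows a per-feature beta_j).
    Returns the Gibbs weights lambda and the fitted distribution q.
    """
    n_points, n_features = features.shape
    emp = features[sample_idx].mean(axis=0)  # empirical feature averages
    lam = np.zeros(n_features)
    for _ in range(iters):
        # Gibbs distribution q_lambda(x) proportional to exp(lambda . f(x))
        logits = features @ lam
        logits -= logits.max()  # shift for numerical stability
        q = np.exp(logits)
        q /= q.sum()
        # gradient of the empirical log loss: E_q[f] - empirical averages
        grad = features.T @ q - emp
        lam -= lr * grad
        # soft-thresholding step implements the l1 penalty
        lam = np.sign(lam) * np.maximum(np.abs(lam) - lr * beta, 0.0)
    logits = features @ lam
    logits -= logits.max()
    q = np.exp(logits)
    return lam, q / q.sum()


# Toy usage: a 1-D grid domain with two bounded features.
xs = np.linspace(0.0, 1.0, 100)
feats = np.stack([xs, xs ** 2], axis=1)  # features bounded in [0, 1]
true_p = np.exp(-3.0 * xs)
true_p /= true_p.sum()
rng = np.random.default_rng(0)
idx = rng.choice(100, size=30, p=true_p)  # samples from the "true" density
lam, q = regularized_maxent(feats, idx)
print("lambda:", lam)

Shrinking the weights toward zero via soft-thresholding is what keeps the estimate from fitting the empirical feature averages exactly, mirroring the relaxed constraints described above.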

Original language: English (US)
Pages (from-to): 472-486
Number of pages: 15
Journal: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Volume: 3120
State: Published - 2004
Externally published: Yes
Event: 17th Annual Conference on Learning Theory, COLT 2004, Banff, Canada
Duration: Jul 1, 2004 – Jul 4, 2004

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science
