A nearest-neighbor approach to estimating divergence between continuous random vectors

Qing Wang, Sanjeev R. Kulkarni, Sergio Verdú

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

48 Scopus citations

Abstract

A method for estimating the divergence between multidimensional distributions based on nearest-neighbor distances is proposed. Given i.i.d. samples from each distribution, both the bias and the variance of the estimator are proven to vanish as the sample sizes go to infinity. In experiments on high-dimensional data, the nearest-neighbor approach generally converges faster than previous partitioning-based algorithms.
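To make the idea concrete, here is a minimal sketch of a 1-nearest-neighbor KL divergence estimator in the general spirit of the abstract: for each sample from p, compare its nearest-neighbor distance within its own sample to its nearest-neighbor distance in the sample from q. The function name is ours, and this 1-NN special case is only an illustration; the paper's exact estimator, constants, and analysis may differ.

```python
import numpy as np

def knn_kl_divergence(x, y):
    """1-nearest-neighbor sketch of an estimate of D(p || q).

    x: (n, d) array of i.i.d. samples from p
    y: (m, d) array of i.i.d. samples from q
    Returns an estimate in nats.
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    n, d = x.shape
    m = y.shape[0]

    # rho[i]: distance from x[i] to its nearest neighbor among the other x's
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)  # exclude each point itself
    rho = dxx.min(axis=1)

    # nu[i]: distance from x[i] to its nearest neighbor among the y's
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = dxy.min(axis=1)

    # Larger nu/rho ratios indicate that q places less mass near x[i] than p does.
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))

# Usage: two 1-D Gaussians, N(0,1) vs N(3,1); the true KL is 4.5 nats,
# so the estimate should be clearly positive for moderate sample sizes.
rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, size=(2000, 1))
q_samples = rng.normal(3.0, 1.0, size=(2000, 1))
estimate = knn_kl_divergence(p_samples, q_samples)
```

The brute-force pairwise distance matrices keep the sketch short; a practical implementation would use a spatial index (e.g. a k-d tree) to find nearest neighbors in better than quadratic time.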

Original language: English (US)
Title of host publication: Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006
Pages: 242-246
Number of pages: 5
DOI: 10.1109/ISIT.2006.261842
State: Published - Dec 1 2006
Event: 2006 IEEE International Symposium on Information Theory, ISIT 2006 - Seattle, WA, United States
Duration: Jul 9 2006 - Jul 14 2006

Publication series

Name: IEEE International Symposium on Information Theory - Proceedings
ISSN (Print): 2157-8101

Other

Other: 2006 IEEE International Symposium on Information Theory, ISIT 2006
Country: United States
City: Seattle, WA
Period: 7/9/06 - 7/14/06

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Information Systems
  • Modeling and Simulation
  • Applied Mathematics


Cite this

Wang, Q., Kulkarni, S. R., & Verdú, S. (2006). A nearest-neighbor approach to estimating divergence between continuous random vectors. In Proceedings - 2006 IEEE International Symposium on Information Theory, ISIT 2006 (pp. 242-246). [4035959] (IEEE International Symposium on Information Theory - Proceedings). https://doi.org/10.1109/ISIT.2006.261842