Learning decision rules for pattern classification under a family of probability measures

Sanjeev R. Kulkarni, Mathukumalli Vidyasagar

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

In this paper, uniformly consistent estimation (learnability) of decision rules for pattern classification under a family of probability measures is investigated. In particular, it is shown that uniform boundedness of the metric entropy of the class of decision rules is both necessary and sufficient for learnability under each of two conditions: i) the family of probability measures is totally bounded with respect to the total variation metric, and ii) the family of probability measures contains an interior point when equipped with the same metric. The latter condition shows that, insofar as uniform consistency is concerned, when the family of distributions contains a total variation neighborhood, nothing is gained by this knowledge about the distribution. Two further sufficient conditions for learnability are then presented. Specifically, it is shown that learnability with respect to each of a finite collection of families of probability measures implies learnability with respect to their union, and that learnability with respect to each of a finite number of probability measures implies learnability with respect to the convex hull of the corresponding families of uniformly absolutely continuous probability measures.
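The total variation metric under which the abstract's conditions (total boundedness, existence of an interior point) are stated can be illustrated for discrete distributions. The sketch below is not from the paper; the function name and example distributions are ours, and it only demonstrates the standard definition TV(P, Q) = (1/2) Σᵢ |pᵢ − qᵢ|, which takes values in [0, 1].

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions:
    TV(P, Q) = (1/2) * sum_i |p_i - q_i|, a metric taking values in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

# Example: two distributions on a three-point alphabet.
P = [0.5, 0.3, 0.2]
Q = [0.4, 0.4, 0.2]
print(total_variation(P, Q))  # 0.1
```

A family of measures is totally bounded in this metric when, for every ε > 0, it can be covered by finitely many ε-balls; an interior point is a measure whose entire ε-neighborhood lies in the family.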

Original language: English (US)
Pages (from-to): 154-166
Number of pages: 13
Journal: IEEE Transactions on Information Theory
Volume: 43
Issue number: 1
DOIs
State: Published - 1997

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Keywords

  • Class of distributions
  • Decision rules
  • Estimation
  • Metric entropy
  • PAC learning
  • Pattern classification
  • Uniform consistency
  • VC dimension

