TY - GEN
T1 - Statistically Near-Optimal Hypothesis Selection
AU - Bousquet, Olivier
AU - Braverman, Mark
AU - Kol, Gillat
AU - Efremenko, Klim
AU - Moran, Shay
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Hypothesis Selection is a fundamental distribution learning problem: given a comparator class Q = {q_1, ..., q_n} of distributions and sampling access to an unknown target distribution p, the goal is to output a distribution q such that TV(p, q) is close to opt, where opt = min_i TV(p, q_i) and TV(·,·) denotes the total-variation distance. Although this problem has been studied since the 19th century, its complexity in terms of basic resources, such as the number of samples and the approximation guarantee, remains unsettled (this is discussed, e.g., in the charming book by Devroye and Lugosi '00). This is in stark contrast with other (younger) learning settings, such as PAC learning, for which these complexities are well understood. We derive an optimal 2-approximation learning strategy for the Hypothesis Selection problem, outputting q such that TV(p, q) ≤ 2·opt + ε, with a (nearly) optimal sample complexity of Õ(log n / ε²). This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal 2-approximation, but with an exponentially worse sample complexity of Õ(√n / ε^2.5), and Yatracos (Annals of Statistics '85) gave a learner with the optimal sample complexity of O(log n / ε²) but with a sub-optimal approximation factor of 3. We note that many works in the Density Estimation (a.k.a. Distribution Learning) literature use Hypothesis Selection as a black-box subroutine. Our result therefore improves the approximation factors obtained by these works while keeping their sample complexity intact. For example, our result improves the approximation factor of the algorithm of Ashtiani, Ben-David, Harvey, Liaw, and Mehrabian (JACM '20) for agnostic learning of mixtures of Gaussians from 9 to 6, while maintaining its nearly-tight sample complexity.
AB - Hypothesis Selection is a fundamental distribution learning problem: given a comparator class Q = {q_1, ..., q_n} of distributions and sampling access to an unknown target distribution p, the goal is to output a distribution q such that TV(p, q) is close to opt, where opt = min_i TV(p, q_i) and TV(·,·) denotes the total-variation distance. Although this problem has been studied since the 19th century, its complexity in terms of basic resources, such as the number of samples and the approximation guarantee, remains unsettled (this is discussed, e.g., in the charming book by Devroye and Lugosi '00). This is in stark contrast with other (younger) learning settings, such as PAC learning, for which these complexities are well understood. We derive an optimal 2-approximation learning strategy for the Hypothesis Selection problem, outputting q such that TV(p, q) ≤ 2·opt + ε, with a (nearly) optimal sample complexity of Õ(log n / ε²). This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal 2-approximation, but with an exponentially worse sample complexity of Õ(√n / ε^2.5), and Yatracos (Annals of Statistics '85) gave a learner with the optimal sample complexity of O(log n / ε²) but with a sub-optimal approximation factor of 3. We note that many works in the Density Estimation (a.k.a. Distribution Learning) literature use Hypothesis Selection as a black-box subroutine. Our result therefore improves the approximation factors obtained by these works while keeping their sample complexity intact. For example, our result improves the approximation factor of the algorithm of Ashtiani, Ben-David, Harvey, Liaw, and Mehrabian (JACM '20) for agnostic learning of mixtures of Gaussians from 9 to 6, while maintaining its nearly-tight sample complexity.
UR - http://www.scopus.com/inward/record.url?scp=85127136426&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127136426&partnerID=8YFLogxK
U2 - 10.1109/FOCS52979.2021.00092
DO - 10.1109/FOCS52979.2021.00092
M3 - Conference contribution
AN - SCOPUS:85127136426
T3 - Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
SP - 909
EP - 919
BT - Proceedings - 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science, FOCS 2021
PB - IEEE Computer Society
T2 - 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021
Y2 - 7 February 2022 through 10 February 2022
ER -