TY - GEN

T1 - Adversarial laws of large numbers and optimal regret in online classification

AU - Alon, Noga

AU - Ben-Eliezer, Omri

AU - Dagan, Yuval

AU - Moran, Shay

AU - Naor, Moni

AU - Yogev, Eylon

N1 - Funding Information:
Noga Alon is supported in part by National Science Foundation (NSF) grant DMS-1855464, US-Israel Binational Science Foundation (BSF) grant 2018267, and by the Simons Foundation. Research partially conducted while Omri Ben-Eliezer was at the Weizmann Institute of Science, supported in part by an Israel Science Foundation (ISF) grant no. 950/15. Shay Moran is a Robert J. Shillman Fellow and was supported in part by the ISF (grant no. 1225/20), by an Azrieli Faculty Fellowship, and by BSF grant 2018385. Moni Naor is supported in part by ISF grants (no. 950/15 and 2686/20) and by the Simons Foundation Collaboration on the Theory of Algorithmic Fairness; he is the incumbent of the Judith Kleeman Professorial Chair. Eylon Yogev is supported in part by ISF grants 484/18 and 1789/19, by Len Blavatnik and the Blavatnik Foundation, and by the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv University.
Publisher Copyright:
© 2021 ACM.

PY - 2021/6/15

Y1 - 2021/6/15

N2 - Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample. We study laws of large numbers in sampling processes that interact with, and may affect, the environment they act upon. Specifically, we consider the sequential sampling model proposed by Ben-Eliezer and Yogev (2020), and characterize the classes that admit a uniform law of large numbers in this model: these are exactly the classes that are online learnable. Our characterization may be interpreted as an online analogue of the equivalence between learnability and uniform convergence in statistical (PAC) learning. The sample-complexity bounds we obtain are tight for many parameter regimes, and as an application, we determine the optimal regret bounds in online learning, stated in terms of Littlestone's dimension, thus resolving the main open question from Ben-David, Pál, and Shalev-Shwartz (2009), which was also posed by Rakhlin, Sridharan, and Tewari (2015).

AB - Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample. We study laws of large numbers in sampling processes that interact with, and may affect, the environment they act upon. Specifically, we consider the sequential sampling model proposed by Ben-Eliezer and Yogev (2020), and characterize the classes that admit a uniform law of large numbers in this model: these are exactly the classes that are online learnable. Our characterization may be interpreted as an online analogue of the equivalence between learnability and uniform convergence in statistical (PAC) learning. The sample-complexity bounds we obtain are tight for many parameter regimes, and as an application, we determine the optimal regret bounds in online learning, stated in terms of Littlestone's dimension, thus resolving the main open question from Ben-David, Pál, and Shalev-Shwartz (2009), which was also posed by Rakhlin, Sridharan, and Tewari (2015).

KW - Littlestone dimension

KW - adversarial robustness

KW - online learning

KW - random sampling

KW - robust sampling

UR - http://www.scopus.com/inward/record.url?scp=85108173431&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85108173431&partnerID=8YFLogxK

U2 - 10.1145/3406325.3451041

DO - 10.1145/3406325.3451041

M3 - Conference contribution

AN - SCOPUS:85108173431

T3 - Proceedings of the Annual ACM Symposium on Theory of Computing

SP - 447

EP - 455

BT - STOC 2021 - Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing

A2 - Khuller, Samir

A2 - Williams, Virginia Vassilevska

PB - Association for Computing Machinery

T2 - 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021

Y2 - 21 June 2021 through 25 June 2021

ER -