TY - JOUR
T1 - Submultiplicative Glivenko-Cantelli and uniform convergence of revenues
AU - Alon, Noga
AU - Babaioff, Moshe
AU - Gonczarowski, Yannai A.
AU - Mansour, Yishay
AU - Moran, Shay
AU - Yehudayoff, Amir
N1 - Funding Information:
The research of Noga Alon is supported in part by an ISF grant and by a GIF grant. Yannai Gonczarowski is supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities; his work is supported by ISF grant 1435/14, administered by the Israel Academy of Sciences and Humanities, and by United States-Israel Binational Science Foundation (BSF) grant number 2014389; this project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 740282). The research of Yishay Mansour was supported in part by the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11), by a grant from the Israel Science Foundation, and by a grant from the United States-Israel Binational Science Foundation (BSF); the research was done while the author was co-affiliated with Microsoft Research. The research of Shay Moran is supported by the National Science Foundation and the Simons Foundation; part of the research was done while the author was co-affiliated with Microsoft Research. The research of Amir Yehudayoff is supported by ISF grant 1162/15.
Publisher Copyright:
© 2017 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2017
Y1 - 2017
AB - In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution. Our variant allows for tighter convergence bounds for extreme values of the CDF. We apply our bound in the context of revenue learning, which is a well-studied problem in economics and algorithmic game theory. We derive sample-complexity bounds on the uniform convergence rate of the empirical revenues to the true revenues, assuming a bound on the kth moment of the valuations, for any (possibly fractional) k > 1. For uniform convergence in the limit, we give a complete characterization and a zero-one law: if the first moment of the valuations is finite, then uniform convergence almost surely occurs; conversely, if the first moment is infinite, then uniform convergence almost never occurs.
UR - http://www.scopus.com/inward/record.url?scp=85046999360&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046999360&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85046999360
SN - 1049-5258
VL - 2017-December
SP - 1657
EP - 1666
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 31st Annual Conference on Neural Information Processing Systems, NIPS 2017
Y2 - 4 December 2017 through 9 December 2017
ER -