TY - GEN
T1 - On Infinite Separations Between Simple and Optimal Mechanisms
AU - Psomas, Alexandros
AU - Schvartzman, Ariel
AU - Weinberg, S. Matthew
N1 - Publisher Copyright:
© 2022 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2022
Y1 - 2022
N2 - We consider a revenue-maximizing seller with k heterogeneous items for sale to a single additive buyer, whose values are drawn from a known, possibly correlated prior D. It is known that there exist priors D such that simple mechanisms (those with bounded menu complexity) extract an arbitrarily small fraction of the optimal revenue [BCKW15, HN19]. This paper considers the opposite direction: given a correlated distribution D witnessing an infinite separation between simple and optimal mechanisms, what can be said about D? [HN19] provides a framework for constructing such D: it takes as input a sequence of k-dimensional vectors satisfying some geometric property, and produces a D witnessing an infinite gap. Our first main result establishes that this framework is without loss: every D witnessing an infinite separation could have resulted from this framework. An earlier version of their work [HN13] provided a more streamlined, and more restrictive, framework. Our second main result establishes that this restrictive framework is not tight. That is, we provide an instance D witnessing an infinite gap, but which provably could not have resulted from the restrictive framework. As a corollary, we discover a new kind of mechanism which can witness these infinite separations on instances where the previous “aligned” mechanisms do not.
AB - We consider a revenue-maximizing seller with k heterogeneous items for sale to a single additive buyer, whose values are drawn from a known, possibly correlated prior D. It is known that there exist priors D such that simple mechanisms (those with bounded menu complexity) extract an arbitrarily small fraction of the optimal revenue [BCKW15, HN19]. This paper considers the opposite direction: given a correlated distribution D witnessing an infinite separation between simple and optimal mechanisms, what can be said about D? [HN19] provides a framework for constructing such D: it takes as input a sequence of k-dimensional vectors satisfying some geometric property, and produces a D witnessing an infinite gap. Our first main result establishes that this framework is without loss: every D witnessing an infinite separation could have resulted from this framework. An earlier version of their work [HN13] provided a more streamlined, and more restrictive, framework. Our second main result establishes that this restrictive framework is not tight. That is, we provide an instance D witnessing an infinite gap, but which provably could not have resulted from the restrictive framework. As a corollary, we discover a new kind of mechanism which can witness these infinite separations on instances where the previous “aligned” mechanisms do not.
UR - http://www.scopus.com/inward/record.url?scp=85163149000&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163149000&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85163149000
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -