Abstract
We consider Bandits with Knapsacks (henceforth, BwK), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known knapsack problem: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While prior work on BwK focused on the stochastic version, we pioneer the other extreme, in which the outcomes can be chosen adversarially. This is a considerably harder problem compared to both the stochastic version and the "classic" adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio: the ratio of the benchmark reward to the algorithm's reward. We design an algorithm with competitive ratio O(log T) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version and use it as a subroutine to solve the latter. Our algorithm is the first "black-box reduction" from bandits to BwK: it takes an arbitrary bandit algorithm and uses it as a subroutine. We use this reduction to derive several extensions.
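To make the "regret minimization in repeated games" framework concrete, the sketch below simulates a primal-dual loop of the kind the abstract describes: a primal bandit algorithm picks arms, a dual no-regret algorithm picks resources, and the two play a zero-sum game on a Lagrangian that trades off reward against budget consumption. This is a minimal illustration, not the paper's algorithm: the choice of EXP3 and Hedge as the two players, the specific Lagrangian L(a, j) = r(a) + 1 - (T/B)·c_j(a), the toy stochastic instance, and all parameter settings are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d = 3, 2            # number of arms and number of constrained resources
T, B = 10_000, 2_000   # time horizon and per-resource budget

# Toy stochastic instance (assumed for illustration): mean reward per arm and
# mean per-round consumption of each resource.
mean_reward = np.array([0.9, 0.5, 0.2])
mean_cost = np.array([[0.8, 0.1],    # arm 0: high reward, heavy on resource 0
                      [0.2, 0.4],
                      [0.05, 0.05]])

def lagrangian(r, c):
    """One plausible Lagrangian mixing reward and consumption (an assumption,
    not the paper's exact form): L(a, j) = r(a) + 1 - (T/B) * c_j(a)."""
    return r + 1.0 - (T / B) * c

# Primal player: EXP3 over arms, maximizing the Lagrangian (bandit feedback).
w_arms = np.ones(K)
eta_p, gamma = np.sqrt(np.log(K) / (K * T)), 0.05
# Dual player: Hedge over resources, minimizing the Lagrangian (full feedback).
w_res = np.ones(d)
eta_d = np.sqrt(np.log(d) / T)

budget = np.full(d, float(B))
total_reward, t = 0.0, 0

for t in range(T):
    if np.any(budget < 1.0):          # stop once some resource is exhausted
        break
    p = (1 - gamma) * w_arms / w_arms.sum() + gamma / K   # EXP3 exploration mix
    q = w_res / w_res.sum()
    a = rng.choice(K, p=p)

    # Outcomes for the pulled arm (Bernoulli draws in this toy instance).
    r = float(rng.random() < mean_reward[a])
    c = (rng.random(d) < mean_cost[a]).astype(float)
    total_reward += r
    budget -= c

    # Dual (Hedge) update: it observes the Lagrangian value for every resource.
    L_vec = lagrangian(r, c)
    w_res *= np.exp(-eta_d * L_vec)
    w_res /= w_res.sum()

    # Primal (EXP3) update: bandit feedback at the dual's mixed strategy,
    # importance-weighted onto the pulled arm.
    est = np.zeros(K)
    est[a] = float(q @ L_vec) / p[a]
    w_arms *= np.exp(eta_p * est)
    w_arms /= w_arms.sum()

print(f"stopped after round {t}, total reward {total_reward:.0f}")
```

Note that nothing in this loop depends on EXP3 specifically: any bandit algorithm could be plugged in as the primal player, which is the sense in which the abstract calls the approach a "black-box reduction" from bandits to BwK.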
Original language | English (US)
---|---
Article number | 40
Journal | Journal of the ACM
Volume | 69
Issue number | 6
DOIs |
State | Published - Nov 17 2022
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Information Systems
- Hardware and Architecture
- Artificial Intelligence
Keywords
- multi-armed bandits
- adversarial bandits
- bandits with knapsacks
- competitive ratio
- primal-dual algorithms
- regret