TY - GEN

T1 - Adaptivity gaps for stochastic probing

T2 - 28th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017

AU - Gupta, Anupam

AU - Nagarajan, Viswanath

AU - Singla, Sahil

N1 - Funding Information:
Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Research partly supported by NSF awards CCF-1319811, CCF-1536002, CCF-1540541 and CCF-1617790.

PY - 2017

Y1 - 2017

N2 - Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability p_e, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.

AB - Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability p_e, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.

UR - http://www.scopus.com/inward/record.url?scp=85016189747&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85016189747&partnerID=8YFLogxK

U2 - 10.1137/1.9781611974782.111

DO - 10.1137/1.9781611974782.111

M3 - Conference contribution

AN - SCOPUS:85016189747

T3 - Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms

SP - 1688

EP - 1702

BT - 28th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017

A2 - Klein, Philip N.

PB - Society for Industrial and Applied Mathematics (SIAM)

Y2 - 16 January 2017 through 19 January 2017

ER -