TY - GEN
T1 - Algorithms and adaptivity gaps for stochastic k-TSP
AU - Jiang, Haotian
AU - Li, Jian
AU - Liu, Daogao
AU - Singla, Sahil
N1 - Publisher Copyright:
© Haotian Jiang, Jian Li, Daogao Liu, and Sahil Singla.
PY - 2020/1
Y1 - 2020/1
N2 - Given a metric (V, d) and a root ∈ V, the classic k-TSP problem is to find a minimum-length tour originating at the root that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP. In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [13], each vertex v in the given metric (V, d) contains a stochastic reward Rv. The goal is to adaptively find a tour of minimum expected length that collects reward at least k; here “adaptively” means our next decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem and leave open whether there is an O(1)-approximation algorithm. We completely resolve their open question and even give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP. We also introduce and obtain similar results for the Stoch-Cost k-TSP problem, in which each vertex v has a stochastic cost Cv and the goal is to visit and select at least k vertices so as to minimize the expected sum of the tour length and the costs of the selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [33] from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: “repetitions” and “critical scaling”. In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman’s and Jogdeo-Samuels’ inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at several scales and identify a “critical” scale.
AB - Given a metric (V, d) and a root ∈ V, the classic k-TSP problem is to find a minimum-length tour originating at the root that visits at least k nodes in V. In this work, motivated by applications where the input to an optimization problem is uncertain, we study two stochastic versions of k-TSP. In Stoch-Reward k-TSP, originally defined by Ene-Nagarajan-Saket [13], each vertex v in the given metric (V, d) contains a stochastic reward Rv. The goal is to adaptively find a tour of minimum expected length that collects reward at least k; here “adaptively” means our next decision may depend on previous outcomes. Ene et al. give an O(log k)-approximation adaptive algorithm for this problem and leave open whether there is an O(1)-approximation algorithm. We completely resolve their open question and even give an O(1)-approximation non-adaptive algorithm for Stoch-Reward k-TSP. We also introduce and obtain similar results for the Stoch-Cost k-TSP problem, in which each vertex v has a stochastic cost Cv and the goal is to visit and select at least k vertices so as to minimize the expected sum of the tour length and the costs of the selected vertices. Besides being a natural stochastic generalization of k-TSP, this problem is also interesting because it generalizes the Price of Information framework [33] from deterministic probing costs to metric probing costs. Our techniques are based on two crucial ideas: “repetitions” and “critical scaling”. In general, replacing a random variable with its expectation leads to very poor results. We show that for our problems, if we truncate the random variables at an ideal threshold, then their expected values form a good surrogate. Here, we rely on running several repetitions of our algorithm with the same threshold, and then argue concentration using Freedman’s and Jogdeo-Samuels’ inequalities. Unfortunately, this ideal threshold depends on how far we are from achieving our target k, which a non-adaptive algorithm does not know. To overcome this barrier, we truncate the random variables at several scales and identify a “critical” scale.
KW - Approximation algorithms
KW - Stochastic optimization
KW - Travelling salesman problem
UR - http://www.scopus.com/inward/record.url?scp=85078065960&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078065960&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.ITCS.2020.45
DO - 10.4230/LIPIcs.ITCS.2020.45
M3 - Conference contribution
AN - SCOPUS:85078065960
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - 11th Innovations in Theoretical Computer Science Conference, ITCS 2020
A2 - Vidick, Thomas
PB - Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 11th Innovations in Theoretical Computer Science Conference, ITCS 2020
Y2 - 12 January 2020 through 14 January 2020
ER -