TY - GEN
T1 - Efficient regret minimization in non-convex games
AU - Hazan, Elad
AU - Singh, Karan
AU - Zhang, Cyril
N1 - Publisher Copyright:
Copyright 2017 by the author(s).
PY - 2017
Y1 - 2017
AB - We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
UR - http://www.scopus.com/inward/record.url?scp=85048397247&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85048397247&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85048397247
T3 - 34th International Conference on Machine Learning, ICML 2017
SP - 2278
EP - 2288
BT - 34th International Conference on Machine Learning, ICML 2017
PB - International Machine Learning Society (IMLS)
T2 - 34th International Conference on Machine Learning, ICML 2017
Y2 - 6 August 2017 through 11 August 2017
ER -