TY - GEN
T1 - Characterizing implicit bias in terms of optimization geometry
AU - Gunasekar, Suriya
AU - Lee, Jason
AU - Soudry, Daniel
AU - Srebro, Nathan
N1 - Publisher Copyright:
© 2018 35th International Conference on Machine Learning, ICML 2018. All rights reserved.
PY - 2018
Y1 - 2018
N2 - We study the implicit bias of generic optimization methods, including mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by optimization can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step size and momentum.
AB - We study the implicit bias of generic optimization methods, including mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by optimization can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step size and momentum.
UR - http://www.scopus.com/inward/record.url?scp=85057321382&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057321382&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85057321382
T3 - 35th International Conference on Machine Learning, ICML 2018
SP - 2932
EP - 2955
BT - 35th International Conference on Machine Learning, ICML 2018
A2 - Dy, Jennifer
A2 - Krause, Andreas
PB - International Machine Learning Society (IMLS)
T2 - 35th International Conference on Machine Learning, ICML 2018
Y2 - 10 July 2018 through 15 July 2018
ER -