TY - JOUR
T1 - Local maxima in the likelihood of Gaussian Mixture Models
T2 - 30th Annual Conference on Neural Information Processing Systems, NIPS 2016
AU - Jin, Chi
AU - Zhang, Yuchen
AU - Balakrishnan, Sivaraman
AU - Wainwright, Martin J.
AU - Jordan, Michael I.
N1 - Funding Information:
This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670, and National Science Foundation Grant CIF-31712-23800.
Publisher Copyright:
© 2016 NIPS Foundation - All Rights Reserved.
PY - 2016
Y1 - 2016
N2 - We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M ≥ 3 components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 - e^(-Ω(M)). We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.
AB - We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M ≥ 3 components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 - e^(-Ω(M)). We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.
UR - http://www.scopus.com/inward/record.url?scp=85019182259&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019182259&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85019182259
SN - 1049-5258
SP - 4123
EP - 4131
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 5 December 2016 through 10 December 2016
ER -