Practical large-scale optimization for max-norm regularization

Jason Lee, Benjamin Recht, Ruslan Salakhutdinov, Nathan Srebro, Joel A. Tropp

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

95 Scopus citations

Abstract

The max-norm was proposed as a convex matrix regularizer in [1] and was shown to be empirically superior to the trace-norm for collaborative filtering problems. Although the max-norm can be computed in polynomial time, there are currently no practical algorithms for solving large-scale optimization problems that incorporate the max-norm. The present work uses a factorization technique of Burer and Monteiro [2] to devise scalable first-order algorithms for convex programs involving the max-norm. These algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems. Empirically, the new methods outperform mature techniques from all three areas.
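To make the factorization idea concrete: the max-norm of a matrix X admits the characterization ‖X‖_max = min over factorizations X = U Vᵀ of (max_i ‖U_i‖₂)·(max_j ‖V_j‖₂), so the Burer–Monteiro approach optimizes over the factors U and V directly, keeping every row inside a Euclidean ball. Below is a minimal projected-gradient sketch for max-norm-constrained matrix completion in this spirit. It is an illustration, not the paper's algorithm: the function names, the step size, and the simple row-clipping projection are assumptions made here for clarity.

```python
import numpy as np

def project_rows(A, radius):
    """Clip each row of A to Euclidean norm at most `radius`.
    Bounding every row of U and V by sqrt(R) certifies ||U V^T||_max <= R."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

def maxnorm_complete(M, mask, rank=5, R=1.0, lr=0.05, iters=500, seed=0):
    """Illustrative projected gradient descent on the factors (U, V) of
    X = U @ V.T for matrix completion under a max-norm bound R.
    `mask` is 1 on observed entries of M and 0 elsewhere."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    r = np.sqrt(R)
    for _ in range(iters):
        E = mask * (U @ V.T - M)        # residual on observed entries
        gU, gV = E @ V, E.T @ U         # gradients of 0.5 * ||E||_F^2
        U = project_rows(U - lr * gU, r)
        V = project_rows(V - lr * gV, r)
    return U @ V.T
```

Because each row of U and V is kept inside a ball of radius √R after every step, the iterate U Vᵀ always satisfies the max-norm constraint, and in particular every entry of the recovered matrix is bounded by R in absolute value.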

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 23
Subtitle of host publication: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
State: Published - Dec 1 2010
Externally published: Yes
Event: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010 - Vancouver, BC, Canada
Duration: Dec 6 2010 - Dec 9 2010

Publication series

Name: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010

Other

Other: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
Country/Territory: Canada
City: Vancouver, BC
Period: 12/6/10 - 12/9/10

All Science Journal Classification (ASJC) codes

  • Information Systems
