Proximal Newton-type methods for convex optimization

Jason D. Lee, Yuekai Sun, Michael A. Saunders

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

53 Scopus citations

Abstract

We seek to solve convex optimization problems in composite form: $\min_{x \in \mathbb{R}^n} f(x) := g(x) + h(x)$, where $g$ is convex and continuously differentiable and $h : \mathbb{R}^n \to \mathbb{R}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods that handles such convex but nonsmooth objective functions. We prove that these methods are globally convergent and achieve superlinear rates of convergence in the vicinity of an optimal solution. We also demonstrate their performance on problems of relevance in machine learning and statistics.
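To make the composite setup concrete, below is a minimal illustrative sketch of a proximal Newton iteration on a lasso instance, taking $g(x) = \tfrac{1}{2}\|Ax - b\|^2$ and $h(x) = \lambda\|x\|_1$. The lasso choice, the inner proximal-gradient solver for the subproblem, the fixed iteration counts, and the unit step length are all assumptions made for illustration; the paper's methods are more general (allowing approximate Hessians and using a line search to guarantee global convergence), and this is not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal mapping of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_newton_lasso(A, b, lam, x0, outer_iters=20, inner_iters=50):
    """Proximal Newton sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    g(x) = 0.5*||Ax - b||^2 is smooth; h(x) = lam*||x||_1 has a cheap prox.
    Each outer step minimizes the local quadratic model of g plus h:
        min_d  grad_g(x)'d + 0.5*d'Hd + h(x + d),
    solved here (approximately) by inner proximal-gradient iterations.
    """
    x = x0.copy()
    H = A.T @ A                    # exact Hessian of g (constant for least squares)
    L = np.linalg.norm(H, 2)       # Lipschitz constant of the model's smooth gradient
    for _ in range(outer_iters):
        grad = A.T @ (A @ x - b)   # gradient of g at the current iterate
        z = x.copy()               # subproblem variable, z = x + d
        for _ in range(inner_iters):
            model_grad = grad + H @ (z - x)
            z = soft_threshold(z - model_grad / L, lam / L)
        x = z                      # unit step; the paper uses a line search instead
    return x

# Hypothetical usage on random data:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x_hat = prox_newton_lasso(A, b, lam=0.1, x0=np.zeros(20))
```

When $h \equiv 0$ the subproblem reduces to the Newton system and the iteration recovers the classical Newton method, which is the sense in which these methods generalize Newton-type schemes.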

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 25
Subtitle of host publication: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012
Pages: 827-835
Number of pages: 9
State: Published - 2012
Externally published: Yes
Event: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012 - Lake Tahoe, NV, United States
Duration: Dec 3 2012 - Dec 6 2012

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 2
ISSN (Print): 1049-5258

Other

Other: 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012
Country/Territory: United States
City: Lake Tahoe, NV
Period: 12/3/12 - 12/6/12

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
