Abstract
We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special cases of proximal Newton-type methods, and our analysis yields new convergence results for some of these methods.
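To make the setting concrete, below is a minimal Python sketch (not the authors' implementation) of one way a proximal Newton-type iteration can look for the composite problem minimize g(x) + h(x) with g(x) = 0.5·||Ax − b||² smooth and h(x) = λ·||x||₁ admitting a simple proximal mapping (soft-thresholding). The helper names (`prox_newton_l1`, `soft_threshold`), the inner proximal-gradient solver for the scaled subproblem, and all constants are illustrative assumptions; the inner loop is deliberately inexact, mirroring the inexact search directions mentioned in the abstract.

```python
# A minimal sketch of a proximal Newton-type iteration for
#   minimize_x  g(x) + h(x),
# with g(x) = 0.5*||A x - b||^2 (smooth) and h(x) = lam*||x||_1 (simple prox).
# The scaled subproblem is solved inexactly by a few proximal-gradient steps;
# problem data, tolerances, and constants below are illustrative assumptions.

import numpy as np


def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def prox_newton_l1(A, b, lam, x0, outer_iters=20, inner_iters=50):
    """Illustrative proximal Newton-type method for 0.5*||Ax-b||^2 + lam*||x||_1."""
    x = x0.astype(float).copy()
    H = A.T @ A                   # exact Hessian of g (a quasi-Newton approximation also fits)
    L_sub = np.linalg.norm(H, 2)  # Lipschitz constant of the subproblem gradient

    def F(y):                     # composite objective g + h
        return 0.5 * np.sum((A @ y - b) ** 2) + lam * np.sum(np.abs(y))

    for _ in range(outer_iters):
        grad = A.T @ (A @ x - b)  # gradient of g at x

        # Inexactly solve the scaled subproblem in z = x + d:
        #   min_z  grad^T (z - x) + 0.5*(z - x)^T H (z - x) + lam*||z||_1
        z = x.copy()
        for _ in range(inner_iters):
            grad_model = grad + H @ (z - x)
            z = soft_threshold(z - grad_model / L_sub, lam / L_sub)
        d = z - x                 # inexact proximal Newton search direction

        # Backtracking line search with a sufficient-decrease condition
        delta = grad @ d + lam * (np.sum(np.abs(x + d)) - np.sum(np.abs(x)))
        t = 1.0
        while F(x + t * d) > F(x) + 1e-4 * t * delta and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x


# Example usage on a small synthetic problem (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
x_hat = prox_newton_l1(A, b, lam=0.1, x0=np.zeros(50))
```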
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1420-1443 |
| Number of pages | 24 |
| Journal | SIAM Journal on Optimization |
| Volume | 24 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2014 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Software
- Theoretical Computer Science
- Applied Mathematics
Keywords
- Convex optimization
- Nonsmooth optimization
- Proximal mapping