Finite-sum composition optimization via variance reduced gradient descent

Xiangru Lian, Mengdi Wang, Ji Liu

Research output: Contribution to conference › Paper › peer-review

Abstract

Stochastic composition optimization, proposed recently by Wang et al. [2014], minimizes an objective in the compositional expectation form: min_x (E_i F_i ∘ E_j G_j)(x). It captures many important applications in machine learning, statistics, and finance. In this paper, we consider the finite-sum scenario for composition optimization: min_x f(x) := (1/n) ∑_{i=1}^n F_i((1/m) ∑_{j=1}^m G_j(x)). We propose two algorithms to solve this problem by combining stochastic compositional gradient descent (SCGD) with the stochastic variance reduced gradient (SVRG) technique. A constant linear convergence rate is proved for strongly convex optimization, which substantially improves on the sublinear rate O(K^{-0.8}) of the best known algorithm.
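The SVRG-style variance reduction described in the abstract can be sketched as follows. This is a minimal illustrative implementation on a hypothetical toy instance (linear inner maps G_j(x) = A_j x and quadratic outer functions F_i(y) = 0.5·||y − c_i||²), not a faithful reproduction of the paper's algorithms: a full snapshot of the inner value and gradient is recomputed each epoch, and the inner loop uses variance-reduced estimates of both the inner function value and the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance of min_x (1/n) sum_i F_i((1/m) sum_j G_j(x)):
# inner maps G_j(x) = A_j x, outer functions F_i(y) = 0.5 * ||y - c_i||^2.
d, n, m = 5, 20, 20
A = np.eye(d) + 0.3 * rng.normal(size=(m, d, d))  # near-identity for conditioning
c = rng.normal(size=(n, d))

A_bar = A.mean(axis=0)   # Jacobian of the inner mean G(x) = A_bar x
c_bar = c.mean(axis=0)

def full_inner(x):
    """Exact inner value G(x) = (1/m) sum_j G_j(x)."""
    return A_bar @ x

def full_grad(x):
    """Exact gradient: A_bar^T (G(x) - c_bar) for this quadratic instance."""
    return A_bar.T @ (full_inner(x) - c_bar)

def comp_svrg(epochs=30, inner_steps=50, lr=0.05):
    """SVRG-style variance-reduced compositional gradient descent (sketch)."""
    x_tilde = np.zeros(d)
    for _ in range(epochs):
        G_tilde = full_inner(x_tilde)   # full inner value at the snapshot
        g_tilde = full_grad(x_tilde)    # full gradient at the snapshot
        x = x_tilde.copy()
        for _ in range(inner_steps):
            j = rng.integers(m)         # sampled inner component
            i = rng.integers(n)         # sampled outer component
            # variance-reduced estimate of the inner value G(x)
            g_hat = A[j] @ (x - x_tilde) + G_tilde
            # variance-reduced gradient estimate via the chain rule
            # (here grad F_i(y) = y - c_i and the Jacobian of G_j is A_j)
            v = A[j].T @ (g_hat - c[i]) - A[j].T @ (G_tilde - c[i]) + g_tilde
            x -= lr * v
        x_tilde = x
    return x_tilde

# Closed-form minimizer of this quadratic objective, for reference.
x_star = np.linalg.solve(A_bar.T @ A_bar, A_bar.T @ c_bar)
x_hat = comp_svrg()
err = np.linalg.norm(x_hat - x_star)
print("distance to minimizer:", err)
```

The key property is that both correction terms vanish as the iterate approaches the snapshot, so the estimator's variance shrinks along with the optimality gap; for strongly convex objectives this is what makes a constant step size, and hence a linear rate, possible.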

Original language: English (US)
State: Published - Jan 1 2017
Event: 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017 - Fort Lauderdale, United States
Duration: Apr 20 2017 - Apr 22 2017

Conference

Conference: 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017
Country: United States
City: Fort Lauderdale
Period: 4/20/17 - 4/22/17

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Statistics and Probability

