Affine-invariant online optimization and the low-rank experts problem

Tomer Koren, Roi Livni

Research output: Contribution to journal › Conference article › peer-review


Abstract

We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends only on the best possible preconditioning of the problem in retrospect and on its intrinsic dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order √(rT) for the low-rank experts problem, improving by a √r factor over the previously best known bound and resolving an open problem posed by Hazan et al. [15].
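The abstract describes a lazy (follow-the-regularized-leader style) second-order update. The sketch below is an illustrative reconstruction of that idea, not the paper's exact algorithm: the step size `eta`, the `eps` regularization for invertibility, and the unconstrained domain are all assumptions added here. It accumulates gradient outer products as a curvature proxy and solves for the next iterate against the full gradient sum, which is what makes the update invariant to invertible affine reparameterizations of the problem.

```python
# Hedged sketch of a lazy second-order online update in the spirit of
# Online Lazy Newton. eta, eps, and the unconstrained domain are
# illustrative assumptions, not details from the paper.
import numpy as np

def lazy_newton_iterates(gradients, eta=1.0, eps=1e-3):
    """Given loss gradients g_1..g_T (each a length-d vector), return
    the iterates x_1..x_T of the lazy update:
        A_t     = eps*I + sum_{s<=t} g_s g_s^T   (curvature proxy)
        x_{t+1} = -eta * A_t^{-1} sum_{s<=t} g_s (lazy/FTRL step)
    """
    d = len(gradients[0])
    A = eps * np.eye(d)      # regularized sum of gradient outer products
    g_sum = np.zeros(d)      # running sum of observed gradients
    x = np.zeros(d)          # first iterate: the origin
    iterates = []
    for g in gradients:
        iterates.append(x.copy())
        g = np.asarray(g, dtype=float)
        A += np.outer(g, g)  # accumulate second-order information
        g_sum += g
        # Lazy step: re-solve from the full history each round,
        # rather than projecting an incremental update.
        x = -eta * np.linalg.solve(A, g_sum)
    return iterates

# Example: repeated linear losses with gradient (1, 0) push the
# iterate into the negative first coordinate.
xs = lazy_newton_iterates([[1.0, 0.0]] * 5)
```

Because each iterate is recomputed from the accumulated `A_t` and gradient sum, rescaling or rotating the coordinate system rescales `A_t` correspondingly, which is the mechanism behind the conditioning-independent regret claimed above.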

Original language: English (US)
Pages (from-to): 4748-4756
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2017-December
State: Published - 2017
Event: 31st Annual Conference on Neural Information Processing Systems, NIPS 2017 - Long Beach, United States
Duration: Dec 4, 2017 – Dec 9, 2017

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
