Generalize Across Tasks: Efficient Algorithms for Linear Representation Learning

Brian Bullins, Elad Hazan, Adam Kalai, Roi Livni

Research output: Contribution to journal › Conference article › peer-review

14 Scopus citations

Abstract

We present provable algorithms for learning linear representations that are trained in a supervised fashion across a number of tasks. Whereas previous methods in the multitask learning setting only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees for unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
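
As a rough illustration of the approach sketched in the abstract (a minimal sketch under assumptions, not the authors' exact algorithm): a standard convex relaxation of a low-rank constraint on the stacked task predictors replaces the rank bound with a nuclear-norm ball, which keeps the feasible set convex and admits online projected-gradient updates, one observed task example at a time. The squared loss, step size eta, radius tau, and toy data below are all illustrative choices.

    import numpy as np

    def project_nuclear_ball(W, tau):
        """Project W onto {W : ||W||_* <= tau}; the nuclear norm is the
        standard convex surrogate for a rank constraint."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        if s.sum() <= tau:
            return W
        # Euclidean projection of the singular values onto the scaled
        # simplex {x : x >= 0, sum(x) = tau}; s is sorted descending.
        css = np.cumsum(s)
        idx = np.arange(1, len(s) + 1)
        rho = np.max(np.nonzero(s * idx > css - tau)[0])
        theta = (css[rho] - tau) / (rho + 1.0)
        return (U * np.maximum(s - theta, 0.0)) @ Vt

    def online_step(W, x, y, task, eta, tau):
        """One projected online-gradient step on the squared loss for the
        observed task. The gradient touches only that task's row; the
        projection couples all tasks through the shared low-rank structure."""
        W_new = W.copy()
        W_new[task] -= eta * 2.0 * (W[task] @ x - y) * x
        return project_nuclear_ball(W_new, tau)

    # Toy usage: T tasks whose true predictors share a rank-r structure.
    rng = np.random.default_rng(0)
    T, d, r = 20, 50, 3
    W_star = rng.normal(size=(T, r)) @ rng.normal(size=(r, d))
    tau = np.linalg.svd(W_star, compute_uv=False).sum()
    W = np.zeros((T, d))
    for _ in range(2000):
        t = int(rng.integers(T))
        x = rng.normal(size=d)
        W = online_step(W, x, W_star[t] @ x, t, eta=0.01, tau=tau)

The full SVD inside the projection is the expensive step in this sketch; the trade-off between sample complexity and per-iteration cost mentioned in the abstract presumably corresponds to choosing cheaper iterations (e.g., updates that need only a leading singular vector, in the style of Frank-Wolfe) at some cost in samples, though the precise algorithms are those of the paper, not this sketch.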

Original language: English (US)
Pages (from-to): 235-246
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 98
State: Published - 2019
Event: 30th International Conference on Algorithmic Learning Theory, ALT 2019 - Chicago, United States
Duration: Mar 22 2019 - Mar 24 2019

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • multi-task learning
  • generalization bounds
  • online learning
  • representation learning
