Abstract
We present provable algorithms for learning linear representations trained in a supervised fashion across multiple tasks. Whereas previous multitask learning methods only allow for generalization within tasks that have already been observed, our representations are both efficiently learnable and accompanied by generalization guarantees for unseen tasks. Our method relies on a certain convex relaxation of a non-convex problem, making it amenable to online learning procedures. We further ensure that a low-rank representation is maintained, and we allow for various trade-offs between sample complexity and per-iteration cost, depending on the choice of algorithm.
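The abstract does not spell out the relaxation, so the following is a minimal sketch under an assumption: it illustrates the standard trace-norm (nuclear-norm) relaxation of a rank constraint on the stacked task weight matrix, solved here by batch proximal gradient rather than the paper's online procedures. All function names and parameters below are hypothetical and for illustration only.

```python
# Illustrative sketch only: assumes a trace-norm relaxation of the low-rank
# multitask objective; this is NOT the paper's algorithm, just the common
# convex surrogate it alludes to, solved by batch proximal gradient.
import numpy as np

def svd_shrink(W, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def multitask_trace_norm(Xs, ys, lam=1.0, lr=0.01, iters=500):
    """Minimize sum_t (1/n_t)||X_t w_t - y_t||^2 + lam * ||W||_* over the
    stacked weight matrix W (one row per task) with proximal gradient steps."""
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            G[t] = 2.0 * X.T @ (X @ W[t] - y) / len(y)   # per-task squared-loss gradient
        W = svd_shrink(W - lr * G, lr * lam)             # gradient step + nuclear-norm prox
    return W

# Example: 5 related regression tasks whose true weights share a rank-2 subspace.
rng = np.random.default_rng(0)
B = rng.standard_normal((2, 20))                         # shared 2-dimensional representation
Xs = [rng.standard_normal((50, 20)) for _ in range(5)]
ys = [X @ (rng.standard_normal(2) @ B) + 0.1 * rng.standard_normal(50) for X in Xs]
W = multitask_trace_norm(Xs, ys)
# The trace-norm penalty drives the trailing singular values of W toward zero,
# recovering an (approximately) rank-2 shared representation.
print(np.round(np.linalg.svd(W, compute_uv=False), 3))
```

In this sketch the low-rank structure is encouraged implicitly through singular-value shrinkage; the paper additionally maintains a low-rank representation explicitly and trades off sample complexity against per-iteration cost across algorithmic variants, which this batch example does not capture.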
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 235-246 |
| Number of pages | 12 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 98 |
| State | Published - 2019 |
| Event | 30th International Conference on Algorithmic Learning Theory, ALT 2019 - Chicago, United States |
| Duration | Mar 22 2019 → Mar 24 2019 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Multi-task learning
- Generalization bounds
- Online learning
- Representation learning