Abstract
We study decision making when each action is described by a set of objectives, all of which are to be maximized. During the training phase, we have access to the actions of an outside agent ("mentor"). In the test phase, our goal is to maximally improve upon the mentor's (unobserved) actions across all objectives. We present an algorithm with a vanishing regret compared with the optimal possible improvement, and show that our regret bound is the best possible. The bound is independent of the number of actions, and scales only as the logarithm of the number of objectives.
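To make the regret criterion in the abstract concrete, the following is one plausible formalization in our own notation; the symbols $d$, $V_j$, $\pi^{M}$, and $\hat\pi$ are ours and need not match the paper's exact definitions.

```latex
% A sketch of the max-min improvement criterion suggested by the abstract
% (our notation, not taken from the paper).
% d objectives; V_j(\pi) is the value of policy \pi on objective j;
% \pi^{M} is the mentor's (unobserved) policy and \hat\pi is the learner's output.
\[
  \Delta_j(\pi) \;=\; V_j(\pi) - V_j(\pi^{M})
  \qquad \text{(improvement over the mentor on objective } j\text{)}
\]
\[
  \mathrm{regret}(\hat\pi)
  \;=\;
  \max_{\pi}\,\min_{j\in[d]} \Delta_j(\pi)
  \;-\;
  \min_{j\in[d]} \Delta_j(\hat\pi)
\]
% The abstract's guarantee, in these terms: regret(\hat\pi) vanishes, with a bound
% independent of the number of actions and growing only as log d in the objectives.
```

Under this reading, the benchmark is the largest improvement achievable simultaneously across all $d$ objectives, and the abstract's claim is that the algorithm approaches this benchmark at a rate independent of the number of actions and only logarithmic in $d$.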
Original language | English (US) |
---|---|
Pages (from-to) | 726-741 |
Number of pages | 16 |
Journal | Journal of Machine Learning Research |
Volume | 35 |
State | Published - 2014 |
Externally published | Yes |
Event | 27th Conference on Learning Theory, COLT 2014; Barcelona, Spain; June 13-15, 2014 |
All Science Journal Classification (ASJC) codes
- Software
- Artificial Intelligence
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Apprenticeship learning
- Multi-objective learning
- Random matrix games