Abstract
We study interactive learning in a setting where the agent has to generate a response (e.g., an action or trajectory) given a context and an instruction. In contrast to typical approaches that train the system using reward or expert supervision on the response, we study learning with hindsight labeling, where a teacher provides an instruction that is most suitable for the agent's generated response. Such hindsight labeling of instructions is often easier to provide than expert supervision of the optimal response, which may require expert knowledge or be impractical to elicit. We initiate the theoretical analysis of interactive learning with hindsight labeling. We first provide a lower bound showing that, in general, the regret of any algorithm must scale with the size of the agent's response space. Next, we study a specialized setting where the underlying instruction-response distribution can be decomposed as a low-rank matrix. We introduce an algorithm called LORIL for this setting and show that it is a no-regret algorithm, with regret scaling with √T and depending on the intrinsic rank but not on the size of the agent's response space. We provide experiments showing the performance of LORIL in practice in two domains.
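To make the interaction protocol concrete, below is a minimal, self-contained simulation of the hindsight-labeling loop with a rank-d factorized model of the instruction-response distribution. This is a sketch under stated assumptions, not the paper's LORIL algorithm: the toy teacher, the embedding dimensions, the learning rate, and the softmax gradient update are all illustrative choices, and the context variable is omitted for simplicity.

```python
# Hypothetical sketch of the hindsight-labeling protocol with a rank-d model.
# NOT the paper's LORIL algorithm; all names and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_instr, n_resp, d = 20, 50, 5            # instruction space, response space, intrinsic rank
U = rng.normal(size=(n_instr, d)) * 0.1   # learned instruction embeddings
V = rng.normal(size=(n_resp, d)) * 0.1    # learned response embeddings
lr = 0.1

def teacher_hindsight_label(response):
    """Stand-in teacher: returns the instruction best describing the response.
    In the actual setting this label comes from a teacher, not a formula."""
    return response % n_instr

for t in range(1000):
    g = rng.integers(n_instr)              # instruction given to the agent
    scores = V @ U[g]                      # model's score for each response under g
    y = int(np.argmax(scores))             # agent's greedy response
    g_star = teacher_hindsight_label(y)    # teacher labels the response, not a reward
    # Gradient step on -log P(g_star | y) under a softmax-over-instructions model
    logits = U @ V[y]
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad_U = np.outer(p, V[y])             # d loss / d U: (p - onehot(g_star)) outer V[y]
    grad_U[g_star] -= V[y]
    grad_Vy = U.T @ p - U[g_star]          # d loss / d V[y]
    U -= lr * grad_U
    V[y] -= lr * grad_Vy
```

The key difference from reward-based learning shows up in the teacher call: the teacher labels the agent's *response* with the instruction it best matches, rather than scoring the response against the original instruction.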
Original language | English (US) |
---|---|
Pages (from-to) | 35829-35850 |
Number of pages | 22 |
Journal | Proceedings of Machine Learning Research |
Volume | 235 |
State | Published - 2024 |
Externally published | Yes |
Event | 41st International Conference on Machine Learning, ICML 2024, Vienna, Austria (Jul 21 - Jul 27, 2024) |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability