Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning

Niranjani Prasad, Barbara Engelhardt, Finale Doshi-Velez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A key impediment to reinforcement learning (RL) in real applications with limited, batch data lies in defining a reward function that reflects what we implicitly know about reasonable behaviour for a task and that allows for robust off-policy evaluation. In this work, we develop a method to identify an admissible set of reward functions for policies that (a) do not deviate too far in performance from prior behaviour, and (b) can be evaluated with high confidence, given only a collection of past trajectories. Together, these criteria ensure that we avoid proposing unreasonable policies in high-risk settings. We demonstrate our approach to reward design on synthetic domains as well as in a critical care context, where it guides the design of a reward function that consolidates clinical objectives to learn a policy for weaning patients from mechanical ventilation.
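The abstract's two admissibility criteria can be illustrated with a minimal sketch: score each candidate reward function with ordinary importance-sampled off-policy estimates, then keep only candidates whose high-confidence (Hoeffding) lower bound on value stays within a tolerance of the behaviour policy's performance. The data layout, function names, and the choice of the Hoeffding bound here are illustrative assumptions, not the paper's actual interface or method.

```python
import numpy as np

def ope_estimates(trajectories, reward_fn):
    """Ordinary importance sampling: one value estimate per trajectory.

    Each trajectory is a list of (state, action, p_eval, p_behav) steps,
    and reward_fn maps (state, action) -> a reward in [0, 1].
    (Hypothetical data layout for illustration.)
    """
    out = []
    for traj in trajectories:
        # Cumulative importance ratio between evaluation and behaviour policies.
        rho = np.prod([p_e / p_b for _, _, p_e, p_b in traj])
        ret = sum(reward_fn(s, a) for s, a, _, _ in traj)
        out.append(rho * ret)
    return np.array(out)

def admissible_rewards(candidates, trajectories, behaviour_value,
                       eps=0.1, delta=0.05, max_return=1.0):
    """Keep candidate rewards whose (1 - delta)-confidence lower bound on the
    evaluation-policy value is within eps of the behaviour policy's value.

    Uses a Hoeffding bound, assuming the per-trajectory estimates are
    bounded in [0, max_return] (a simplifying assumption).
    """
    kept = []
    for name, r_fn in candidates:
        est = ope_estimates(trajectories, r_fn)
        n = len(est)
        lcb = est.mean() - max_return * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
        if lcb >= behaviour_value - eps:
            kept.append(name)
    return kept
```

For example, with matched evaluation and behaviour probabilities (importance ratios of 1), a candidate reward that scores every logged step highly survives the filter, while one that zeroes out observed behaviour is rejected as inadmissible.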

Original language: English (US)
Title of host publication: ACM CHIL 2020 - Proceedings of the 2020 ACM Conference on Health, Inference, and Learning
Publisher: Association for Computing Machinery, Inc
Pages: 1-9
Number of pages: 9
ISBN (Electronic): 9781450370462
DOIs: https://doi.org/10.1145/3368555.3384450
State: Published - Feb 4 2020
Event: 2020 ACM Conference on Health, Inference, and Learning, CHIL 2020 - Toronto, Canada
Duration: Apr 2 2020 - Apr 4 2020

Publication series

Name: ACM CHIL 2020 - Proceedings of the 2020 ACM Conference on Health, Inference, and Learning

Conference

Conference: 2020 ACM Conference on Health, Inference, and Learning, CHIL 2020
Country: Canada
City: Toronto
Period: 4/2/20 - 4/4/20

All Science Journal Classification (ASJC) codes

  • Public Health, Environmental and Occupational Health
  • Education
  • Health(social science)

Keywords

  • Off-policy evaluation
  • Reinforcement learning
  • Reward design


Cite this

Prasad, N., Engelhardt, B., & Doshi-Velez, F. (2020). Defining admissible rewards for high-confidence policy evaluation in batch reinforcement learning. In ACM CHIL 2020 - Proceedings of the 2020 ACM Conference on Health, Inference, and Learning (pp. 1-9). Association for Computing Machinery, Inc. https://doi.org/10.1145/3368555.3384450