TY - JOUR
T1 - PAC-Bayes control: learning policies that provably generalize to novel environments
AU - Majumdar, Anirudha
AU - Farid, Alec
AU - Sonar, Anoopkumar
N1 - Funding Information:
The authors were partially supported by the Office of Naval Research (award number N00014-18-1-2873), the National Science Foundation (grant number IIS-1755038), the Google Faculty Research Award, and the Amazon Research Award.
Funding Information:
The authors are grateful to Max Goldstein for initiating the grasping example in Section 7.2 and for contributions to the conference version of this article presented at CoRL 2018. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Publisher Copyright:
© The Author(s) 2020.
PY - 2021/2
Y1 - 2021/2
AB - Our goal is to learn control policies for robots that provably generalize well to novel environments given a dataset of example environments. The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between generalization of control policies to novel environments and generalization of hypotheses in the supervised learning setting. In particular, we utilize the probably approximately correct (PAC)-Bayes framework, which allows us to obtain upper bounds that hold with high probability on the expected cost of (stochastic) control policies across novel environments. We propose policy learning algorithms that explicitly seek to minimize this upper bound. The corresponding optimization problem can be solved using convex optimization (relative entropy programming in particular) in the setting where we are optimizing over a finite policy space. In the more general setting of continuously parameterized policies (e.g., neural network policies), we minimize this upper bound using stochastic gradient descent. We present simulated results of our approach applied to learning (1) reactive obstacle avoidance policies and (2) neural network-based grasping policies. We also present hardware results for the Parrot Swing drone navigating through different obstacle environments. Our examples demonstrate the potential of our approach to provide strong generalization guarantees for robotic systems with continuous state and action spaces, complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., depth images), and neural network-based policies.
KW - Learning-based control
KW - generalization
KW - safety
UR - http://www.scopus.com/inward/record.url?scp=85092087418&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092087418&partnerID=8YFLogxK
U2 - 10.1177/0278364920959444
DO - 10.1177/0278364920959444
M3 - Article
AN - SCOPUS:85092087418
SN - 0278-3649
VL - 40
SP - 574
EP - 593
JO - International Journal of Robotics Research
JF - International Journal of Robotics Research
IS - 2-3
ER -