TY - GEN
T1 - Interpretable Trajectory Prediction for Autonomous Vehicles via Counterfactual Responsibility
AU - Hsu, Kai-Chieh
AU - Leung, Karen
AU - Chen, Yuxiao
AU - Fisac, Jaime F.
AU - Pavone, Marco
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The ability to anticipate surrounding agents' behaviors is critical to enabling safe and seamless autonomous vehicle (AV) operation. While phenomenological methods have successfully predicted future trajectories from scene context, these predictions lack interpretability. Ontological approaches, on the other hand, assume an underlying structure capable of describing the interaction dynamics or agents' internal decision processes; still, they often suffer from poor scalability or cannot reflect diverse human behaviors. This work proposes an interpretability framework for a phenomenological method based on responsibility evaluations. We formulate responsibility as a measure of how much an agent takes the welfare of other agents into account, computed through counterfactual reasoning. Additionally, the framework abstracts the computed responsibility sequences into distinct responsibility levels and grounds these latent levels in reward functions. The proposed responsibility-based interpretability framework is modular and can be easily integrated into a wide range of prediction models. To demonstrate the added interpretability the framework provides, we adapt an existing AV prediction model and perform a simulation study on the real-world nuScenes traffic dataset. Experimental results show that incorporating the responsibility signal enables offline ex-post traffic analysis and yields interpretable yet accurate online trajectory predictions.
UR - http://www.scopus.com/inward/record.url?scp=85182523834&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85182523834&partnerID=8YFLogxK
U2 - 10.1109/IROS55552.2023.10341712
DO - 10.1109/IROS55552.2023.10341712
M3 - Conference contribution
AN - SCOPUS:85182523834
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 5918
EP - 5925
BT - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023
Y2 - 1 October 2023 through 5 October 2023
ER -