TY - GEN
T1 - Cost Adaptation for Robust Decentralized Swarm Behaviour
AU - Henderson, Peter
AU - Vertescher, Matthew
AU - Meger, David
AU - Coates, Mark
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/27
Y1 - 2018/12/27
AB - Decentralized receding horizon control (D-RHC) provides a mechanism for coordination in multiagent settings without a centralized command center. However, combining a set of different goals, costs, and constraints to form an efficient optimization objective for D-RHC can be difficult. To allay this problem, we use a meta-learning process, cost adaptation, which generates the optimization objective for D-RHC to solve based on a set of human-generated priors (cost and constraint functions) and an auxiliary heuristic. We use this adaptive D-RHC method for control of mesh-networked swarm agents. This formulation allows a wide range of tasks to be encoded and can account for network delays, heterogeneous capabilities, and increasingly large swarms through the adaptation mechanism. We leverage the Unity3D game engine to build a simulator capable of introducing artificial networking failures and delays in the swarm. Using the simulator, we validate our method on an example coordinated exploration task. We demonstrate that cost adaptation allows for more efficient and safer task completion under varying environment conditions and increasingly large swarm sizes. We release our simulator and code to the community for future work.
UR - http://www.scopus.com/inward/record.url?scp=85062989527&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062989527&partnerID=8YFLogxK
U2 - 10.1109/IROS.2018.8594283
DO - 10.1109/IROS.2018.8594283
M3 - Conference contribution
AN - SCOPUS:85062989527
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 4099
EP - 4106
BT - 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018
Y2 - 1 October 2018 through 5 October 2018
ER -