TY - GEN
T1 - Verification of recurrent neural networks for cognitive tasks via reachability analysis
AU - Zhang, Hongce
AU - Shinn, Maxwell
AU - Gupta, Aarti
AU - Gurfinkel, Arie
AU - Le, Nham
AU - Narodytska, Nina
N1 - Publisher Copyright:
© 2020 The authors and IOS Press.
PY - 2020/8/24
Y1 - 2020/8/24
N2 - Recurrent Neural Networks (RNNs) are one of the most successful neural network architectures that deal with temporal sequences, e.g., speech and text recognition. Recently, RNNs have been shown to be useful in cognitive neuroscience as a model of decision-making. RNNs can be trained to solve the same behavioral tasks performed by humans and other animals in decision-making experiments, allowing for a direct comparison between networks and experimental subjects. Analysis of RNNs is expected to be a simpler problem than the analysis of neural activity. However, in practice, reasoning about an RNN's behaviour is a challenging problem. In this work, we take an approach based on formal verification for the analysis of RNNs. We make two main contributions. First, we consider the cognitive domain and formally define a set of useful properties to analyse for a popular experimental task. Second, we employ and adapt well-known verification techniques for reachability analysis, namely polytope propagation, invariant detection, and counter-example-guided abstraction refinement, to our focus domain. Our experiments show that our techniques can effectively solve classes of benchmark problems that are challenging for state-of-the-art verification tools.
AB - Recurrent Neural Networks (RNNs) are one of the most successful neural network architectures that deal with temporal sequences, e.g., speech and text recognition. Recently, RNNs have been shown to be useful in cognitive neuroscience as a model of decision-making. RNNs can be trained to solve the same behavioral tasks performed by humans and other animals in decision-making experiments, allowing for a direct comparison between networks and experimental subjects. Analysis of RNNs is expected to be a simpler problem than the analysis of neural activity. However, in practice, reasoning about an RNN's behaviour is a challenging problem. In this work, we take an approach based on formal verification for the analysis of RNNs. We make two main contributions. First, we consider the cognitive domain and formally define a set of useful properties to analyse for a popular experimental task. Second, we employ and adapt well-known verification techniques for reachability analysis, namely polytope propagation, invariant detection, and counter-example-guided abstraction refinement, to our focus domain. Our experiments show that our techniques can effectively solve classes of benchmark problems that are challenging for state-of-the-art verification tools.
UR - http://www.scopus.com/inward/record.url?scp=85091790127&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091790127&partnerID=8YFLogxK
U2 - 10.3233/FAIA200281
DO - 10.3233/FAIA200281
M3 - Conference contribution
AN - SCOPUS:85091790127
T3 - Frontiers in Artificial Intelligence and Applications
SP - 1690
EP - 1697
BT - ECAI 2020 - 24th European Conference on Artificial Intelligence, including 10th Conference on Prestigious Applications of Artificial Intelligence, PAIS 2020 - Proceedings
A2 - De Giacomo, Giuseppe
A2 - Catala, Alejandro
A2 - Dilkina, Bistra
A2 - Milano, Michela
A2 - Barro, Senen
A2 - Bugarin, Alberto
A2 - Lang, Jerome
PB - IOS Press BV
T2 - 24th European Conference on Artificial Intelligence, ECAI 2020, including 10th Conference on Prestigious Applications of Artificial Intelligence, PAIS 2020
Y2 - 29 August 2020 through 8 September 2020
ER -