TY - GEN
T1 - Distill: Domain-Specific Compilation for Cognitive Models
T2 - 20th IEEE/ACM International Symposium on Code Generation and Optimization, CGO 2022
AU - Vesely, Jan
AU - Pothukuchi, Raghavendra Pradyumna
AU - Joshi, Ketaki
AU - Gupta, Samyak
AU - Cohen, Jonathan D.
AU - Bhattacharjee, Abhishek
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Computational models of cognition enable a better understanding of the human brain and behavior, psychiatric and neurological illnesses, clinical interventions to treat illnesses, and also offer a path towards human-like artificial intelligence. Cognitive models are also, however, laborious to develop, requiring composition of many types of computational tasks, and suffer from poor performance as they are generally designed using high-level languages like Python. In this work, we present Distill, a domain-specific compilation tool to accelerate cognitive models while continuing to offer cognitive scientists the ability to develop their models in flexible high-level languages. Distill uses domain-specific knowledge to compile Python-based cognitive models into LLVM IR, carefully stripping away features like dynamic typing and memory management that add performance overheads without being necessary for the underlying computation of the models. The net effect is an average of 27× performance improvement in model execution over state-of-the-art techniques using Pyston and PyPy. Distill also repurposes classical compiler data flow analyses to reveal properties about data flow in cognitive models that are useful to cognitive scientists. Distill is publicly available, integrated in the PsyNeuLink cognitive modeling environment, and is already being used by researchers in the brain sciences.
AB - Computational models of cognition enable a better understanding of the human brain and behavior, psychiatric and neurological illnesses, clinical interventions to treat illnesses, and also offer a path towards human-like artificial intelligence. Cognitive models are also, however, laborious to develop, requiring composition of many types of computational tasks, and suffer from poor performance as they are generally designed using high-level languages like Python. In this work, we present Distill, a domain-specific compilation tool to accelerate cognitive models while continuing to offer cognitive scientists the ability to develop their models in flexible high-level languages. Distill uses domain-specific knowledge to compile Python-based cognitive models into LLVM IR, carefully stripping away features like dynamic typing and memory management that add performance overheads without being necessary for the underlying computation of the models. The net effect is an average of 27× performance improvement in model execution over state-of-the-art techniques using Pyston and PyPy. Distill also repurposes classical compiler data flow analyses to reveal properties about data flow in cognitive models that are useful to cognitive scientists. Distill is publicly available, integrated in the PsyNeuLink cognitive modeling environment, and is already being used by researchers in the brain sciences.
KW - Domain-specific compilation
KW - JIT compilers
KW - Python
KW - cognitive models
KW - human brain
UR - http://www.scopus.com/inward/record.url?scp=85128414812&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85128414812&partnerID=8YFLogxK
U2 - 10.1109/CGO53902.2022.9741278
DO - 10.1109/CGO53902.2022.9741278
M3 - Conference contribution
AN - SCOPUS:85128414812
T3 - CGO 2022 - Proceedings of the 2022 IEEE/ACM International Symposium on Code Generation and Optimization
SP - 301
EP - 312
BT - CGO 2022 - Proceedings of the 2022 IEEE/ACM International Symposium on Code Generation and Optimization
A2 - Lee, Jae W.
A2 - Hack, Sebastian
A2 - Shpeisman, Tatiana
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 2 April 2022 through 6 April 2022
ER -