TY - JOUR
T1 - A graph-theoretic approach to multitasking
AU - Alon, Noga
AU - Reichman, Daniel
AU - Shinkar, Igor
AU - Wagner, Tal
AU - Musslick, Sebastian
AU - Cohen, Jonathan D.
AU - Griffiths, Thomas L.
AU - Dey, Biswadip
AU - Ozcimder, Kayhan
N1 - Funding Information:
∗Equal contribution. †Equal contribution. Supported by DARPA contract N66001-15-2-4048 (Value Alignment in Autonomous Systems) and Grant 2014-1600 from the William and Flora Hewlett Foundation (Project Title: Cybersecurity and Internet Policy). ‡This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
Publisher Copyright:
© 2017 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2017
Y1 - 2017
AB - A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations. However, how the richness of such interactions trades off against the ability of a network to simultaneously carry out multiple independent processes - a salient limitation in many domains of human cognition - remains largely unexplored. In this paper we use a graph-theoretic analysis of network architecture to address this question, where tasks are represented as edges in a bipartite graph G = (A ∪ B, E). We define a new measure of the multitasking capacity of such networks, based on the assumptions that tasks that need to be multitasked rely on independent resources, i.e., form a matching, and that tasks can be multitasked without interference if they form an induced matching. Our main result is an inherent tradeoff between the multitasking capacity and the average degree of the network that holds regardless of the network architecture. These results are also extended to networks of depth greater than 2. On the positive side, we demonstrate that networks that are random-like (e.g., locally sparse) can have desirable multitasking properties. Our results shed light on the parallel-processing limitations of neural systems and provide insights that may be useful for the analysis and design of parallel architectures.
UR - http://www.scopus.com/inward/record.url?scp=85046998987&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85046998987&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85046998987
SN - 1049-5258
VL - 2017-December
SP - 2101
EP - 2110
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 31st Annual Conference on Neural Information Processing Systems, NIPS 2017
Y2 - 4 December 2017 through 9 December 2017
ER -