Topological limits to the parallel processing capability of network architectures

Giovanni Petri, Sebastian Musslick, Biswadip Dey, Kayhan Özcimder, David Turner, Nesreen K. Ahmed, Theodore L. Willke, Jonathan D. Cohen

Research output: Contribution to journal › Article › peer-review

Abstract

The ability to learn new tasks and generalize to others is a remarkable characteristic of both human brains and recent artificial intelligence systems. The ability to perform multiple tasks simultaneously is also a key characteristic of parallel architectures, one that is evident in the human brain and exploited in traditional computing systems. Here we show that these two characteristics reflect a fundamental tradeoff between interactive parallelism, which supports learning and generalization, and independent parallelism, which supports processing efficiency through concurrent multitasking. Although the maximum number of possible parallel tasks grows linearly with network size, under realistic scenarios their expected number grows sublinearly. Hence, even modest reliance on shared representations, which support learning and generalization, constrains the number of parallel tasks. This has profound consequences for understanding the human brain’s mix of sequential and parallel capabilities, as well as for the development of artificial intelligence systems that can optimally manage the tradeoff between learning and processing efficiency.
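The tradeoff sketched in the abstract can be illustrated with a small, self-contained example (not the authors' code). Assuming the paper's setup, in which tasks are input-to-output mappings in a bipartite network, two tasks can run concurrently only if they share no nodes and no cross-connections, so the achievable degree of parallelism corresponds to a large independent set in a task interference graph. The network sizes, the edge density p, and the greedy heuristic below are illustrative assumptions chosen to make the construction concrete, not the paper's exact algorithm.

```python
# Illustrative sketch: tasks as edges of a random bipartite input-output
# network; two tasks "interfere" if they share a node or are linked by a
# cross-edge (a shared representation). A greedy maximal set of mutually
# non-interfering tasks gives a lower bound on parallel capacity.
import random

def sample_bipartite(n, p, rng):
    """Random bipartite graph: inputs 0..n-1, outputs 0..n-1, edge prob p."""
    return [(i, o) for i in range(n) for o in range(n) if rng.random() < p]

def interferes(t1, t2, edge_set):
    """Tasks interfere if they share an input/output node or a cross-edge."""
    (i1, o1), (i2, o2) = t1, t2
    if i1 == i2 or o1 == o2:
        return True  # structural dependence: shared input or output node
    # shared-representation interference via a connecting cross-edge
    return (i1, o2) in edge_set or (i2, o1) in edge_set

def greedy_parallel_set(tasks, edge_set, rng):
    """Greedy maximal set of pairwise non-interfering tasks (lower bound)."""
    rng.shuffle(tasks)
    chosen = []
    for t in tasks:
        if all(not interferes(t, c, edge_set) for c in chosen):
            chosen.append(t)
    return chosen

rng = random.Random(0)
for n in (8, 16, 32, 64, 128):
    caps = []
    for _ in range(20):  # average over random networks of the same size
        edges = sample_bipartite(n, 0.2, rng)
        caps.append(len(greedy_parallel_set(list(edges), set(edges), rng)))
    print(f"N={n:4d}  mean parallel capacity ~ {sum(caps)/len(caps):.1f}")
```

Varying n at fixed edge density shows how interference from shared nodes and cross-connections keeps the attainable set of concurrent tasks well below the number of tasks available, in the spirit of the sublinear scaling described above.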

Original language: English (US)
Pages (from-to): 646-651
Number of pages: 6
Journal: Nature Physics
Volume: 17
Issue number: 5
DOIs
State: Published - May 2021

All Science Journal Classification (ASJC) codes

  • General Physics and Astronomy
