Photonic Multiply-Accumulate Operations for Neural Networks

Mitchell A. Nahmias, Thomas Ferreira De Lima, Alexander N. Tait, Hsuan Tung Peng, Bhavin J. Shastri, Paul R. Prucnal

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

It has long been known that photonic communication can alleviate the data movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 μm), large vector sizes (N > 500), and low noise precision (≤4 bits). We discuss several proposed tunable photonic MAC systems, and provide a concrete comparison between deep learning and photonic hardware using several empirically-validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).
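The multiply-accumulate (MAC) operation the abstract refers to is the primitive underlying the matrix-vector products of neural network layers: each output element is a running sum of element-wise products. A minimal sketch of how MAC counts scale with vector size N (the values of N and the function name here are illustrative, not taken from the paper):

```python
# Each output element of an n_rows x n_cols matrix-vector product
# requires n_cols multiply-accumulate (MAC) operations, so the full
# product costs n_rows * n_cols MACs. This quadratic scaling is why
# the per-MAC energy of the hardware dominates at large N.

def matvec_mac_count(n_rows, n_cols):
    """MACs needed for one matrix-vector product (illustrative helper)."""
    return n_rows * n_cols

# N = 500 is the vector-size regime where the abstract argues
# photonic processors begin to show an advantage.
N = 500
print(matvec_mac_count(N, N))  # 250000 MACs per N x N matrix-vector product
```

Because the MAC count grows as N², a constant-factor per-MAC energy advantage (the abstract's >10²) compounds directly into total workload energy at large layer sizes.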

Original language: English (US)
Article number: 8844098
Journal: IEEE Journal of Selected Topics in Quantum Electronics
Volume: 26
Issue number: 1
DOIs
State: Published - Jan 1 2020

All Science Journal Classification (ASJC) codes

  • Atomic and Molecular Physics, and Optics
  • Electrical and Electronic Engineering

Keywords

  • Artificial intelligence
  • analog computers
  • analog processing circuits
  • neural networks
  • optical computing

