TY - JOUR
T1 - Photonic Multiply-Accumulate Operations for Neural Networks
AU - Nahmias, Mitchell A.
AU - De Lima, Thomas Ferreira
AU - Tait, Alexander N.
AU - Peng, Hsuan-Tung
AU - Shastri, Bhavin J.
AU - Prucnal, Paul R.
N1 - Funding Information:
Manuscript received April 17, 2019; revised September 6, 2019; accepted September 7, 2019. Date of publication September 18, 2019; date of current version December 20, 2019. This work was supported by the National Science Foundation (NSF) (ECCS 1247298, DGE 1148900). (Corresponding author: Mitchell A. Nahmias.) M. A. Nahmias, T. F. de Lima, A. N. Tait, H.-T. Peng, and P. R. Prucnal are with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: mnahmias@princeton.edu; tlima@princeton.edu; atait@ieee.org; hpeng@princeton.edu; prucnal@princeton.edu).
Publisher Copyright:
© 2019 IEEE.
PY - 2020/1/1
Y1 - 2020/1/1
AB - It has long been known that photonic communication can alleviate the data movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capability to implement low-precision linear operations, such as matrix multiplication, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 μm), large vector sizes (N > 500), and low noise precision (≤4 bits). We discuss several proposed tunable photonic MAC systems and provide a concrete comparison between deep learning and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10²), speed (>10³), and compute density (>10²).
KW - Artificial intelligence
KW - analog computers
KW - analog processing circuits
KW - neural networks
KW - optical computing
UR - http://www.scopus.com/inward/record.url?scp=85077238491&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077238491&partnerID=8YFLogxK
U2 - 10.1109/JSTQE.2019.2941485
DO - 10.1109/JSTQE.2019.2941485
M3 - Article
AN - SCOPUS:85077238491
VL - 26
JO - IEEE Journal of Selected Topics in Quantum Electronics
JF - IEEE Journal of Selected Topics in Quantum Electronics
SN - 1077-260X
IS - 1
M1 - 8844098
ER -