TY - JOUR
T1 - Spatially varying nanophotonic neural networks
AU - Wei, Kaixuan
AU - Li, Xiao
AU - Fröch, Johannes
AU - Chakravarthula, Praneeth
AU - Whitehead, James
AU - Tseng, Ethan
AU - Majumdar, Arka
AU - Heide, Felix
N1 - Publisher Copyright:
Copyright © 2024 The Authors, some rights reserved.
PY - 2024/11/8
Y1 - 2024/11/8
AB - The explosive growth in computation and energy cost of artificial intelligence has spurred interest in alternative computing modalities to conventional electronic processors. Photonic processors, which use photons instead of electrons, promise optical neural networks with ultralow latency and power consumption. However, existing optical neural networks, limited by their designs, have not achieved the recognition accuracy of modern electronic neural networks. In this work, we bridge this gap by embedding parallelized optical computation into flat camera optics that perform neural network computations during capture, before recording on the sensor. We leverage large kernels and propose a spatially varying convolutional network learned through a low-dimensional reparameterization. We instantiate this network inside the camera lens with a nanophotonic array with angle-dependent responses. Combined with a lightweight electronic back-end of about 2K parameters, our reconfigurable nanophotonic neural network achieves 72.76% accuracy on CIFAR-10, surpassing AlexNet (72.64%), and advancing optical neural networks into the deep learning era.
UR - https://www.scopus.com/pages/publications/85209398258
UR - https://www.scopus.com/inward/citedby.url?scp=85209398258&partnerID=8YFLogxK
U2 - 10.1126/sciadv.adp0391
DO - 10.1126/sciadv.adp0391
M3 - Article
C2 - 39514662
AN - SCOPUS:85209398258
SN - 2375-2548
VL - 10
JO - Science Advances
JF - Science Advances
IS - 45
M1 - eadp0391
ER -