TY - JOUR
T1 - Ring systolic designs for artificial neural nets
AU - Kung, S. Y.
AU - Hwang, J. N.
N1 - Funding Information:
*This research was supported in part by the National Science Foundation under Grant MIP-87-14689, and by the Innovative Science and Technology Office of the Strategic Defense Initiative Organization, administered through the Office of Naval Research under Contract Nos. N00014-85-K-0469 and N00014-85-K-0599.
Copyright:
Copyright 2017 Elsevier B.V., All rights reserved.
PY - 1988
Y1 - 1988
N2 - This paper advocates digital VLSI architectures for implementing a wide variety of artificial neural nets (ANNs). A programmable systolic array is proposed which maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication. The array is meant to be more general purpose than most other proposed ANN architectures. It may be used for a variety of algorithms in both the search and learning phases of ANNs. Although design considerations for the learning phase are somewhat more involved, our design can accommodate key learning rules, such as the Hebbian, delta, competitive, and back-propagation learning rules. A numerical algebraic analysis permits much improved learning rates compared with existing techniques. Compared to analog neural circuits, the proposed systolic architecture offers greater flexibility, higher precision, and full pipelinability.
AB - This paper advocates digital VLSI architectures for implementing a wide variety of artificial neural nets (ANNs). A programmable systolic array is proposed which maximizes the strength of VLSI in terms of intensive and pipelined computing and yet circumvents the limitation on communication. The array is meant to be more general purpose than most other proposed ANN architectures. It may be used for a variety of algorithms in both the search and learning phases of ANNs. Although design considerations for the learning phase are somewhat more involved, our design can accommodate key learning rules, such as the Hebbian, delta, competitive, and back-propagation learning rules. A numerical algebraic analysis permits much improved learning rates compared with existing techniques. Compared to analog neural circuits, the proposed systolic architecture offers greater flexibility, higher precision, and full pipelinability.
UR - http://www.scopus.com/inward/record.url?scp=0024168396&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0024168396&partnerID=8YFLogxK
U2 - 10.1016/0893-6080(88)90416-9
DO - 10.1016/0893-6080(88)90416-9
M3 - Conference article
AN - SCOPUS:0024168396
SN - 0893-6080
VL - 1
SP - 390
JO - Neural Networks
JF - Neural Networks
IS - 1 SUPPL
T2 - International Neural Network Society 1988 First Annual Meeting
Y2 - 6 September 1988 through 10 September 1988
ER -