Parallel architectures for artificial neural nets

S. Y. Kung, J. N. Hwang

Research output: Contribution to conference › Paper

60 Scopus citations

Abstract

The authors advocate digital VLSI architectures for implementing a wide variety of artificial neural nets (ANNs). A programmable systolic array is proposed, which maximizes the strength of VLSI in terms of intensive and pipelined computing, yet circumvents its limitation on communication. The array is meant to be more general-purpose than most other proposed ANN architectures. It may be used for a variety of algorithms in both the search and learning phases of ANNs, e.g., single-layer recurrent nets (such as Hopfield nets) and multilayer feed-forward nets (such as perceptron-like nets). Although design considerations for the learning phase are somewhat more involved, the proposed design can accommodate very well several key learning rules, such as the Hebbian, delta, competitive, and back-propagation learning rules. Compared to analog neural circuits, the proposed systolic architecture offers higher flexibility, higher precision, and full pipelineability.
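For readers unfamiliar with the models named in the abstract, the sketch below shows the kind of computation being mapped onto the systolic array: Hebbian learning (outer-product weight formation) and the search phase of a Hopfield net (iterated threshold updates of a matrix-vector product). This is an illustrative NumPy sketch, not the paper's systolic design; function names and the synchronous-update choice are the sketch's own assumptions.

```python
import numpy as np

def hebbian_weights(patterns):
    # Hebbian rule: W is the sum of outer products of the stored +/-1
    # patterns, with the diagonal zeroed (no self-connections).
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # Search phase: repeated matrix-vector product followed by a sign
    # threshold, until the state reaches a fixed point. On the proposed
    # array, the inner products would be computed in a pipelined fashion.
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(new, s):
            break
        s = new
    return s

stored = [1, -1, 1, -1, 1, -1]
W = hebbian_weights([stored])
noisy = [1, -1, 1, -1, 1, 1]      # last bit flipped
restored = recall(W, noisy)       # recovers the stored pattern
```

The matrix-vector product inside `recall` is the operation that dominates both phases, which is why a systolic array (a grid of locally connected multiply-accumulate cells) suits it: data flows between neighboring cells only, avoiding the global-communication bottleneck the abstract mentions.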

Original language: English (US)
Pages: 165-172
Number of pages: 8
DOIs: yes
State: Published - 1988

All Science Journal Classification (ASJC) codes

  • Engineering (all)
