This paper advocates a systolic neural network architecture for implementing hidden Markov models (HMM's). A programmable systolic array is proposed, which maximizes the strength of VLSI in terms of intensive and pipelined computing yet circumvents its limitation on communication. A unified algorithmic formulation for the recurrent back-propagation (RBP) network and HMM's is exploited for the architectural design, yielding the basic structure of a universal simulation tool for these connectionist networks. Such networks accomplish information storage and retrieval by altering the pattern of connections among a large number of primitive units, and/or by modifying certain weights associated with each connection. Important concerns are also discussed, including partitioning for large networks, fault tolerance for the ring array architecture, scaling to avoid underflow, and architectures for locally interconnected networks. Finally, implementations based on commercially available VLSI chips (e.g., the Inmos T800) and on custom VLSI technology are discussed.
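The abstract mentions scaling to avoid underflow: in the HMM forward recursion, the probabilities shrink geometrically with sequence length and quickly underflow in fixed- or floating-point hardware, so each step is renormalized and the log-likelihood is recovered from the accumulated scale factors. The following is a minimal sketch of that standard technique (not the paper's systolic implementation); the function name and array layout are illustrative assumptions.

```python
import numpy as np

def scaled_forward(A, B, pi, obs):
    """HMM forward algorithm with per-step scaling to avoid underflow.

    A   : (N, N) state-transition matrix, A[i, j] = P(state j | state i)
    B   : (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    pi  : (N,)   initial state distribution
    obs : sequence of observation symbol indices

    Returns the scaled forward variables and log P(obs | model),
    recovered from the scale factors instead of the (underflowing)
    raw probabilities.
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    scale = np.zeros(T)

    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]

    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()       # per-step normalizer
        alpha[t] /= scale[t]            # keep alpha in representable range

    return alpha, np.log(scale).sum()   # log-likelihood = sum of log scales
```

Because each row of `alpha` is renormalized to sum to one, the recursion stays well conditioned regardless of sequence length, while the product of the scale factors (kept in log form) carries the true likelihood.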