Abstract
Artificial Neural Networks (ANNs) can be used for grey-box or black-box modeling of continuous-time systems by placing them in a framework based on numerical integration techniques. When an implicit integration scheme is used as a template, it imposes a recurrent structure on the overall network. Here we present three algorithms suitable for training such "network-plus-integrator" assemblies and compare their relative computational efficiencies. Pineda's Recurrent Back-Propagation (RBP) training method is recast to exploit the structure of the assembly. The second approach is RBP modified to evaluate the partial derivatives of the network outputs with respect to the parameters exactly, while the third is a Newton-Raphson-based algorithm in which the network outputs and partial derivatives are computed at each step rather than approximated. We compare the methods via an illustrative example and discuss aspects of training in a parallel computing environment.
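The sketch below is not the paper's implementation; it is a minimal illustration of the "network-plus-integrator" idea the abstract describes, assuming a small tanh feedforward network as the right-hand side of dx/dt = f(x) and a backward-Euler template. Because the scheme is implicit, each step requires solving x_{k+1} = x_k + h·f(x_{k+1}), here by Newton-Raphson; all sizes, weights, and function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a feedforward ANN as the right-hand side
# f(x) of dx/dt = f(x), embedded in an implicit (backward) Euler step.  Solving
# x_{k+1} = x_k + h*f(x_{k+1}) at every step makes the assembly recurrent in x_{k+1}.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_hidden = 2, 8

# Randomly initialised network parameters (these would be trained in practice).
W1 = 0.5 * rng.standard_normal((n_hidden, n_state))
b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_state, n_hidden))
b2 = np.zeros(n_state)

def f(x):
    """ANN approximation of the continuous-time dynamics dx/dt = f(x)."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jac_f(x):
    """Analytic Jacobian df/dx of the network output."""
    s = np.tanh(W1 @ x + b1)
    return W2 @ np.diag(1.0 - s**2) @ W1

def implicit_euler_step(x_k, h, tol=1e-10, max_iter=20):
    """One backward-Euler step: solve g(x) = x - x_k - h*f(x) = 0 by Newton-Raphson."""
    x = x_k.copy()                          # initial guess: previous state
    for _ in range(max_iter):
        g = x - x_k - h * f(x)
        if np.linalg.norm(g) < tol:
            break
        J = np.eye(n_state) - h * jac_f(x)  # Jacobian of the residual g
        x = x - np.linalg.solve(J, g)
    return x

# Simulate the network-plus-integrator assembly forward in time.
x = np.array([1.0, -0.5])
h = 0.05
for _ in range(100):
    x = implicit_euler_step(x, h)
print(x)
```

Training would additionally require derivatives of each step's output with respect to the network parameters, which is where the three algorithms compared in the paper (structured RBP, RBP with exact partials, and the Newton-Raphson-based method) differ.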
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | S751-S756 |
| Journal | Computers and Chemical Engineering |
| Volume | 20 |
| Issue number | SUPPL.1 |
| DOIs | |
| State | Published - 1996 |
All Science Journal Classification (ASJC) codes
- General Chemical Engineering
- Computer Science Applications