### Abstract

This paper is developed in two parts. First, we formulate the solution of the general reduced-rank linear approximation problem, relaxing the invertibility assumption on the input autocorrelation matrix made by previous authors. Our treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-associative networks, the singular value decomposition (SVD), and Principal Component Analysis (PCA) as special cases. Our analysis also shows that a two-layer linear neural network with a reduced number of hidden units, trained under the least-squares error criterion, produces weights that correspond to the Generalized Singular Value Decomposition (GSVD) of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components.

Second, we investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By extending the concept of deflation (originally proposed for the standard eigenvalue problem), we show that a sequential version of linear backpropagation can extract the exact generalized eigenvector components. The advantage of this approach is that the model structure is easy to update: a unit can be added, or one or more units pruned, as the application requires. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. We call this the Lateral Orthogonalization Network (LON), and we show analytically, and verify by simulation, that the network extracts the desired components. The advantage of the LON-based model is that it operates in parallel, so the components are extracted concurrently.
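To make the reduced-rank least-squares problem concrete, here is a minimal NumPy sketch that solves it without assuming an invertible input autocorrelation, using the Moore-Penrose pseudoinverse followed by an SVD truncation of the fitted output. This is the standard reduced-rank-regression construction, not the paper's GSVD-based derivation; the variable names and dimensions are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input with a linearly dependent row: the autocorrelation X @ X.T is
# singular, so the classical solution Y X^T (X X^T)^{-1} does not exist.
X = rng.normal(size=(5, 200))
X = np.vstack([X, X[0] + X[1]])          # 6 x 200, rank 5
Y = rng.normal(size=(4, 6)) @ X + 0.01 * rng.normal(size=(4, 200))

# Full-rank least-squares solution via the pseudoinverse.
W_full = Y @ np.linalg.pinv(X)

# Reduced-rank (rank-r) solution: project the fitted output
# onto its top-r left singular subspace.
r = 2
U, _, _ = np.linalg.svd(W_full @ X, full_matrices=False)
W_r = U[:, :r] @ U[:, :r].T @ W_full     # rank of W_r is at most r
```

The projection step is an application of the Eckart-Young theorem to the fitted output, which is why truncating the SVD yields the optimal rank-r map.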
Finally, we apply our results to the identification of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility of the input autocorrelation matrix, and therefore cannot be applied in this case.
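The sequential deflation idea can be illustrated on the standard symmetric eigenvalue problem (the paper extends deflation to the generalized eigenproblem and realizes it with trained network units rather than explicit matrix iterations). The sketch below is an assumed minimal illustration, not the paper's algorithm: power iteration finds the dominant eigenpair, and deflation removes it so the next component can be extracted.

```python
import numpy as np

def top_eigs_by_deflation(A, k, iters=500, seed=1):
    """Extract the k largest eigenpairs of a symmetric PSD matrix A,
    one at a time: power iteration finds the dominant component, then
    deflation subtracts it so the next component dominates."""
    A = A.astype(float).copy()
    rng = np.random.default_rng(seed)
    vals, vecs = [], []
    for _ in range(k):
        v = rng.normal(size=A.shape[0])
        for _ in range(iters):
            v = A @ v
            v /= np.linalg.norm(v)
        lam = float(v @ A @ v)           # Rayleigh quotient
        vals.append(lam)
        vecs.append(v)
        A -= lam * np.outer(v, v)        # deflation step
    return np.array(vals), np.array(vecs)
```

Because each extracted component is subtracted from the matrix, adding one more unit (or pruning one) only affects the tail of the sequence, which mirrors the structure-update advantage claimed for the sequential network model.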

| Original language | English (US) |
|---|---|
| Pages (from-to) | 684-697 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 5 |
| Issue number | 5 |
| DOIs | 10.1109/72.317721 |
| State | Published - Sep 1994 |

### All Science Journal Classification (ASJC) codes

- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence

## Fingerprint

Research topics of 'Multilayer Neural Networks for Reduced-Rank Approximation'.

## Cite this

Multilayer Neural Networks for Reduced-Rank Approximation. *IEEE Transactions on Neural Networks*, *5*(5), 684-697. https://doi.org/10.1109/72.317721