TY - JOUR
T1 - Smooth function approximation using neural networks
AU - Ferrari, Silvia
AU - Stengel, Robert F.
N1 - Funding Information:
Manuscript received August 6, 2001; revised October 15, 2003. This work was supported by the Federal Aviation Administration and the National Aeronautics and Space Administration under FAA Grant 95-G-0011. S. Ferrari is with the Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708 USA (e-mail: [email protected]). R. F. Stengel is with the Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 USA. Digital Object Identifier 10.1109/TNN.2004.836233
PY - 2005/1
Y1 - 2005/1
N2 - An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
AB - An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and, possibly, gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
KW - Algebraic
KW - Function approximation
KW - Gradient
KW - Input-output
KW - Training
UR - http://www.scopus.com/inward/record.url?scp=13844255524&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=13844255524&partnerID=8YFLogxK
U2 - 10.1109/TNN.2004.836233
DO - 10.1109/TNN.2004.836233
M3 - Article
C2 - 15732387
AN - SCOPUS:13844255524
SN - 1045-9227
VL - 16
SP - 24
EP - 38
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 1
ER -