Constrained optimization of neural network architecture

Mung Chiang

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution

Abstract

By introducing a well-motivated information-theoretic metric and new convex optimization algorithms, the architecture of a neural network is designed to enhance its supervised learning capability. We formulate two optimization frameworks that admit efficient algorithms for large numbers of variables and accommodate a variety of practical constraints on the structural randomness of neural networks. Convex optimization is also applied to independent component analysis (ICA) and to multi-antenna fading communication channels.
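The abstract does not specify the paper's algorithms, but the general idea of maximizing an information-theoretic metric subject to structural constraints can be illustrated with a minimal sketch. Here, hypothetically, a vector of connection-density fractions across a network's layers is tuned by projected gradient ascent to maximize Shannon entropy on the probability simplex; the function names and the choice of entropy as the metric are assumptions for illustration, not the paper's method.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex {p : p >= 0, sum(p) = 1}."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    # largest index rho with u[rho] + (1 - css[rho]) / (rho + 1) > 0
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def maximize_entropy(n, steps=500, lr=0.1):
    """Projected gradient ascent on Shannon entropy H(p) = -sum p_i log p_i.

    Each iterate takes an unconstrained gradient step and projects back
    onto the simplex, the standard pattern for simple convex constraints.
    """
    p = project_to_simplex(np.random.default_rng(0).random(n))
    for _ in range(steps):
        grad = -(np.log(p + 1e-12) + 1)       # dH/dp_i = -log p_i - 1
        p = project_to_simplex(p + lr * grad)
    return p

# Entropy is maximized by the uniform allocation, so for n = 4 the
# iterates converge toward [0.25, 0.25, 0.25, 0.25].
p = maximize_entropy(4)
```

Because entropy is concave and the simplex is convex, this is a convex program and projected gradient ascent converges to the global optimum; adding further structural constraints (e.g., per-layer density caps) would only change the projection step.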

Original language: English (US)
Title of host publication: 2000 IEEE Asia-Pacific Conference on Circuits and Systems
Subtitle of host publication: Electronic Communication Systems
Pages: 356-359
Number of pages: 4
State: Published - Dec 1 2000
Externally published: Yes
Event: 2000 IEEE Asia-Pacific Conference on Circuits and Systems: Electronic Communication Systems - Tianjin, China
Duration: Dec 4 2000 - Dec 6 2000

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering


Cite this

Chiang, M. (2000). Constrained optimization of neural network architecture. In 2000 IEEE Asia-Pacific Conference on Circuits and Systems: Electronic Communication Systems (pp. 356-359).