Constrained optimization of neural network architecture

Mung Chiang

Research output: Contribution to conference › Paper › peer-review

Abstract

By introducing a well-motivated information-theoretic metric and new convex optimization algorithms, the architecture of a neural network is designed to enhance its supervised learning capability. We formulate two optimization frameworks that admit efficient algorithms for a large number of variables and accommodate a variety of practical constraints on the structural randomness of neural networks. Convex optimization is also applied to independent component analysis (ICA) and multi-antenna fading communication channels.
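The abstract does not spell out the two frameworks, but the general pattern it describes, minimizing a convex objective over network parameters subject to practical constraints, can be sketched generically. The example below is an illustrative assumption, not the paper's formulation: it minimizes a convex quadratic loss ||Aw - b||² over weights w under box constraints 0 ≤ wᵢ ≤ 1, using projected gradient descent, a standard method for large-variable constrained convex problems.

```python
import numpy as np

# Illustrative sketch only (not the paper's metric or algorithm):
# minimize the convex loss f(w) = ||A w - b||^2
# subject to box constraints lo <= w_i <= hi,
# via projected gradient descent.

def projected_gradient(A, b, lo=0.0, hi=1.0, step=None, iters=2000):
    m, n = A.shape
    if step is None:
        # Step size 1/L, where L = 2 * sigma_max(A)^2 is the
        # Lipschitz constant of the gradient of f.
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    w = np.zeros(n)
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ w - b)          # gradient of the quadratic loss
        w = np.clip(w - step * grad, lo, hi)    # gradient step, then project onto the box
    return w

# Synthetic feasible problem: the true weights satisfy the constraints,
# so the projected iterates should recover them.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
w_true = np.array([0.2, 0.9, 0.0, 0.5, 1.0])
b = A @ w_true
w_hat = projected_gradient(A, b)
```

Because the constraint set here is a box, the projection is a simple element-wise clip; more elaborate structural constraints would need their own projection or a general-purpose convex solver.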

Original language: English (US)
Pages: 356-359
Number of pages: 4
State: Published - 2000
Event: 2000 IEEE Asia-Pacific Conference on Circuits and Systems: Electronic Communication Systems - Tianjin, China
Duration: Dec 4, 2000 to Dec 6, 2000


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
