### Abstract

We provide new theoretical insights on why over-parametrization is effective in learning neural networks. For a k-hidden-node shallow network with quadratic activation and n training data points, we show that as long as k ≥ √(2n), over-parametrization enables local search algorithms to find a globally optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, using the theory of Rademacher complexity, we show that with weight decay, the solution also generalizes well if the data is sampled from a regular distribution such as Gaussian. To prove that when k ≥ √(2n) the loss function has benign landscape properties, we adopt an idea from smoothed analysis, which may have other applications in studying loss surfaces of neural networks.
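The setting the abstract describes can be sketched concretely. The following is a minimal, hypothetical illustration (not the paper's exact experiments): a shallow network with quadratic activation, f(x) = Σ_j (w_j·x)², with hidden width k ≥ √(2n), trained by plain gradient descent with weight decay on Gaussian data. The teacher network, hyperparameters, and training loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 5, 20                           # input dimension, sample size
k = int(np.ceil(np.sqrt(2 * n)))       # hidden width k >= sqrt(2n), as in the abstract

X = rng.standard_normal((n, d))        # Gaussian inputs
w_star = rng.standard_normal(d)        # planted single-unit teacher (an assumption)
y = (X @ w_star) ** 2                  # labels from the teacher

def predict(W, X):
    # quadratic activation with unit output weights: f(x) = sum_j (w_j . x)^2
    return ((X @ W.T) ** 2).sum(axis=1)

W = 0.1 * rng.standard_normal((k, d))  # over-parametrized student, k hidden units
lr, wd = 1e-3, 1e-4                    # illustrative step size and weight-decay strength

loss0 = 0.5 * np.mean((predict(W, X) - y) ** 2)
for _ in range(3000):
    r = predict(W, X) - y                                # residuals, shape (n,)
    # gradient of 0.5 * MSE w.r.t. W (rows are hidden units), plus weight decay
    grad = (2.0 / n) * ((r[:, None] * (X @ W.T)).T @ X) + wd * W
    W -= lr * grad

loss = 0.5 * np.mean((predict(W, X) - y) ** 2)
```

With quadratic activation the network output depends on W only through Σ_j w_j w_jᵀ, which is the low-rank matrix structure the paper's landscape analysis exploits.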

Original language | English (US)
---|---
Title of host publication | 35th International Conference on Machine Learning, ICML 2018
Editors | Andreas Krause, Jennifer Dy
Publisher | International Machine Learning Society (IMLS)
Pages | 2132-2141
Number of pages | 10
ISBN (Electronic) | 9781510867963
State | Published - Jan 1 2018
Externally published | Yes
Event | 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden. Duration: Jul 10 2018 → Jul 15 2018

### Publication series

Name | 35th International Conference on Machine Learning, ICML 2018
---|---
Volume | 3

### Other

Other | 35th International Conference on Machine Learning, ICML 2018
---|---
Country | Sweden
City | Stockholm
Period | 7/10/18 → 7/15/18

### All Science Journal Classification (ASJC) codes

- Computational Theory and Mathematics
- Human-Computer Interaction
- Software

## Fingerprint

Research topics of 'On the Power of Over-parametrization in Neural Networks with Quadratic Activation'.

## Cite this

*35th International Conference on Machine Learning, ICML 2018* (pp. 2132-2141). (35th International Conference on Machine Learning, ICML 2018; Vol. 3). International Machine Learning Society (IMLS).