Abstract
Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to achieve the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We do so by training a single neural network, letting it converge to several local minima along its optimization path, and saving the model parameters at each. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100, our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4%, respectively.
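To make the mechanism concrete, the following is a minimal sketch in PyTorch, assuming a standard classification setup; it is an illustration of the idea, not the authors' released code, and all names and hyperparameters (`snapshot_train`, `cycles`, the toy model) are ours. The learning rate follows a cyclic cosine-annealing schedule that restarts every `epochs // cycles` epochs; a snapshot of the weights is saved at the end of each cycle, when the network has descended into a local minimum, and predictions are made by averaging the snapshots' softmax outputs.

```python
# Minimal sketch of Snapshot Ensembling in PyTorch; an illustration, not the
# authors' released code. Model, data, and hyperparameters are placeholders.
import copy
import math

import torch
import torch.nn as nn
import torch.utils.data


def snapshot_train(model, loader, epochs=12, cycles=3, lr0=0.1):
    """Train one network and snapshot its weights at the end of each cycle."""
    opt = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.9)
    epochs_per_cycle = epochs // cycles
    snapshots = []
    for epoch in range(epochs):
        # Cyclic cosine annealing: restart at lr0 at the start of each cycle
        # and decay toward 0 (the paper anneals per iteration; we anneal per
        # epoch here for brevity).
        t = epoch % epochs_per_cycle
        lr = lr0 / 2 * (math.cos(math.pi * t / epochs_per_cycle) + 1)
        for group in opt.param_groups:
            group["lr"] = lr
        model.train()
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
        if (epoch + 1) % epochs_per_cycle == 0:
            # End of a cycle: the network sits near a local minimum, so save
            # a snapshot of its parameters.
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots


def ensemble_predict(model, snapshots, x):
    """Ensemble by averaging the softmax outputs of the saved snapshots."""
    model.eval()
    total = None
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            p = torch.softmax(model(x), dim=1)
            total = p if total is None else total + p
    return total / len(snapshots)


if __name__ == "__main__":
    # Toy usage on random data, just to show the training/inference flow.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    data = torch.utils.data.TensorDataset(
        torch.randn(256, 10), torch.randint(0, 3, (256,))
    )
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)
    snaps = snapshot_train(model, loader, epochs=12, cycles=3, lr0=0.1)
    print(ensemble_predict(model, snaps, torch.randn(8, 10)).argmax(dim=1))
```

The paper averages the softmax outputs of only the last few snapshots at test time; the sketch averages all of them to keep the example short.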
| Original language | English (US) |
|---|---|
| State | Published - 2017 |
| Externally published | Yes |
| Event | 5th International Conference on Learning Representations, ICLR 2017 - Toulon, France |
| Duration | Apr 24 2017 → Apr 26 2017 |
Conference
| Conference | 5th International Conference on Learning Representations, ICLR 2017 |
|---|---|
| Country/Territory | France |
| City | Toulon |
| Period | 4/24/17 → 4/26/17 |
All Science Journal Classification (ASJC) codes
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics