Abstract
The success of modern deep neural networks (DNNs) is often achieved at the expense of computational cost, which prevents their deployment in resource- and time-constrained scenarios. While recently developed efficient DNNs are making real-world deployment more feasible, they do not fully exploit input properties to maximize computational efficiency. Specifically, current efficient DNNs use a one-size-fits-all approach that processes all inputs identically. Since different images require different feature embeddings to be accurately classified, we propose a fully dynamic paradigm that endows DNNs with hierarchical inference dynamics at the level of layers and individual convolutional filters. Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict which layers or filters are redundant and can be skipped. L-Net and C-Net also learn to scale the outputs of the retained computations to maximize task accuracy. By integrating L-Net and C-Net into a joint design framework, called LC-Net, we consistently outperform state-of-the-art dynamic frameworks in both efficiency and classification accuracy. On the CIFAR-10 dataset, LC-Net results in up to 11.9x fewer floating-point operations (FLOPs) and up to 3.3% higher accuracy compared to other dynamic inference methods. On the ImageNet dataset, LC-Net achieves up to 1.4x fewer FLOPs and up to 4.6% higher Top-1 accuracy than the other methods.
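
To make the idea concrete, the following is a minimal, hypothetical PyTorch sketch of input-dependent layer and filter gating as described in the abstract; it is not the authors' implementation. The module name `GatedBlock` and the single gating head standing in for L-Net and C-Net are illustrative assumptions, and details such as making the discrete skip decisions differentiable during training are omitted.

```python
# Illustrative sketch only: a lightweight gate decides whether to run a block
# (layer-level skipping) and how to scale or zero its output channels
# (filter-level skipping). Names and design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(                 # the "expensive" computation
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Compact gating head: one scalar for keeping the whole layer,
        # plus one scale per output channel (loosely analogous to L-Net / C-Net).
        self.gate = nn.Linear(channels, 1 + channels)

    def forward(self, x):
        stats = F.adaptive_avg_pool2d(x, 1).flatten(1)   # cheap input summary
        g = self.gate(stats)
        keep_layer = torch.sigmoid(g[:, :1])             # layer-level decision
        channel_scale = torch.sigmoid(g[:, 1:])          # per-filter scaling

        if keep_layer.mean() < 0.5:                      # hard skip at inference
            return x                                     # redundant layer: identity
        out = self.body(x)
        # Scale retained filters; scales near zero effectively prune channels.
        out = out * channel_scale.unsqueeze(-1).unsqueeze(-1)
        return x + out

# Usage: a stack of gated blocks skips work adaptively per input.
blocks = nn.Sequential(*[GatedBlock(64) for _ in range(4)])
y = blocks(torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```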
Original language | English (US)
---|---
Journal | IEEE Transactions on Emerging Topics in Computing
DOIs |
State | Accepted/In press - 2021
All Science Journal Classification (ASJC) codes
- Computer Science (miscellaneous)
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
Keywords
- Conditional computation
- deep learning
- dynamic execution
- dynamic inference
- model compression