TY - GEN
T1 - A discriminant information approach to deep neural network pruning
AU - Hou, Zejiang
AU - Kung, Sun Yuan
N1 - Publisher Copyright:
© 2020 IEEE
PY - 2020
Y1 - 2020
N2 - Network pruning has become the de facto tool for accelerating deep neural networks in mobile and edge applications. Recently, feature-map discriminant-based channel pruning has shown promising results, as it aligns well with the CNN's objective of differentiating multiple classes and offers better interpretability of pruning decisions. However, existing discriminant-based methods suffer from computational inefficiency, as there is a lack of theoretical guidance for quantifying feature-map discriminant power. In this paper, we develop a mathematical formulation to accurately and efficiently quantify feature-map discriminativeness, which gives rise to a novel criterion, Discriminant Information (DI). We analyze the theoretical properties of DI, specifically its non-decreasing property, which makes DI a valid channel selection criterion. By measuring the differential discriminant, we can identify and remove the channels with minimal influence on the discriminant power. The versatility of the DI criterion also enables intra-layer mixed-precision quantization to further compress the network. Moreover, we propose a DI-based greedy pruning algorithm and a structure distillation technique to automatically determine a pruned structure that satisfies a given resource budget, a common requirement in practice. Extensive experiments demonstrate the effectiveness of our method: our pruned ResNet50 achieves a 44% FLOPs reduction on ImageNet without any Top-1 accuracy loss compared to the unpruned model.
AB - Network pruning has become the de facto tool for accelerating deep neural networks in mobile and edge applications. Recently, feature-map discriminant-based channel pruning has shown promising results, as it aligns well with the CNN's objective of differentiating multiple classes and offers better interpretability of pruning decisions. However, existing discriminant-based methods suffer from computational inefficiency, as there is a lack of theoretical guidance for quantifying feature-map discriminant power. In this paper, we develop a mathematical formulation to accurately and efficiently quantify feature-map discriminativeness, which gives rise to a novel criterion, Discriminant Information (DI). We analyze the theoretical properties of DI, specifically its non-decreasing property, which makes DI a valid channel selection criterion. By measuring the differential discriminant, we can identify and remove the channels with minimal influence on the discriminant power. The versatility of the DI criterion also enables intra-layer mixed-precision quantization to further compress the network. Moreover, we propose a DI-based greedy pruning algorithm and a structure distillation technique to automatically determine a pruned structure that satisfies a given resource budget, a common requirement in practice. Extensive experiments demonstrate the effectiveness of our method: our pruned ResNet50 achieves a 44% FLOPs reduction on ImageNet without any Top-1 accuracy loss compared to the unpruned model.
UR - http://www.scopus.com/inward/record.url?scp=85110543882&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85110543882&partnerID=8YFLogxK
U2 - 10.1109/ICPR48806.2021.9412693
DO - 10.1109/ICPR48806.2021.9412693
M3 - Conference contribution
AN - SCOPUS:85110543882
T3 - Proceedings - International Conference on Pattern Recognition
SP - 9553
EP - 9560
BT - Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 25th International Conference on Pattern Recognition, ICPR 2020
Y2 - 10 January 2021 through 15 January 2021
ER -