TY - JOUR
T1 - Sanity-checking pruning methods: Random tickets can win the jackpot
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
AU - Su, Jingtong
AU - Chen, Yihang
AU - Cai, Tianle
AU - Wu, Tianhao
AU - Gao, Ruiqi
AU - Wang, Liwei
AU - Lee, Jason D.
N1 - Funding Information:
We thank Jonathan Frankle, Mingjie Sun and Guodong Zhang for discussions on pruning literature. TC and RG are supported in part by the Zhongguancun Haihua Institute for Frontier Information Technology. LW was supported by National Key R&D Program of China (2018YFB1402600), Key-Area Research and Development Program of Guangdong Province (No. 2019B121204008) and Beijing Academy of Artificial Intelligence. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, and NSF CCF 2002272.
Publisher Copyright:
© 2020 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2020
Y1 - 2020
AB - Network pruning is a method for reducing test-time computational resource requirements with minimal performance degradation. Conventional wisdom about pruning algorithms suggests that: (1) Pruning methods exploit information from the training data to find good subnetworks; (2) The architecture of the pruned network is crucial for good performance. In this paper, we conduct sanity checks on these beliefs for several recent unstructured pruning methods and surprisingly find that: (1) A set of methods that aim to find good subnetworks of the randomly initialized network (which we call “initial tickets”) hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights per layer unchanged, does not affect the final performance. These findings inspire us to choose a series of simple, data-independent prune ratios for each layer and to randomly prune each layer accordingly to obtain a subnetwork (which we call “random tickets”). Experimental results show that our zero-shot random tickets outperform or match the performance of existing “initial tickets”. In addition, we identify one existing pruning method that passes our sanity checks. We hybridize the ratios of our random tickets with this method and propose a new method called “hybrid tickets”, which achieves further improvement.
UR - http://www.scopus.com/inward/record.url?scp=85108407339&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108407339&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85108407339
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 6 December 2020 through 12 December 2020
ER -
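
The abstract above describes two concrete procedures: a layer-wise weight-shuffling sanity check (randomly rearranging which weights a pruned layer keeps while holding the per-layer count of kept weights fixed) and "random tickets" (pruning each layer at random according to a data-independent keep ratio). Below is a minimal NumPy sketch of both ideas, not the authors' code; the layer shapes and per-layer keep ratios are illustrative placeholders, not the schedule proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

def shuffle_mask_per_layer(masks):
    """Sanity check: randomly permute each layer's binary pruning mask.

    The number of preserved (nonzero) weights per layer is unchanged;
    only *which* positions are preserved changes.
    """
    shuffled = []
    for mask in masks:
        flat = mask.flatten()          # flatten() returns a copy
        rng.shuffle(flat)              # in-place random permutation of the 0/1 entries
        shuffled.append(flat.reshape(mask.shape))
    return shuffled

def random_ticket_masks(layer_shapes, keep_ratios):
    """Random ticket: a data-independent random binary mask for each layer."""
    masks = []
    for shape, keep in zip(layer_shapes, keep_ratios):
        n = int(np.prod(shape))
        k = int(round(keep * n))       # number of weights to preserve in this layer
        flat = np.zeros(n, dtype=np.float32)
        flat[rng.choice(n, size=k, replace=False)] = 1.0
        masks.append(flat.reshape(shape))
    return masks

if __name__ == "__main__":
    shapes = [(64, 32), (32, 10)]      # two toy "layers" (hypothetical sizes)
    keep_ratios = [0.2, 0.5]           # hypothetical data-independent keep ratios
    masks = random_ticket_masks(shapes, keep_ratios)
    shuffled = shuffle_mask_per_layer(masks)
    for m, s in zip(masks, shuffled):
        assert m.sum() == s.sum()      # per-layer sparsity is preserved by the shuffle
        print("kept", int(m.sum()), "of", m.size, "weights")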