TY - GEN
T1 - The anatomy of efficient FFT and Winograd convolutions on modern CPUs
AU - Zlateski, Aleksandar
AU - Jia, Zhen
AU - Li, Kai
AU - Durand, Fredo
N1 - Publisher Copyright:
© 2019 ACM.
PY - 2019/6/26
Y1 - 2019/6/26
N2 - Winograd-based convolution has quickly gained traction as a preferred approach for implementing convolutional neural networks (ConvNets) on various hardware platforms because it can require fewer floating-point operations than FFT-based or direct convolutions. In this paper, we analyze the theoretical performance of three methods (regular FFT-, Gauss-FFT-, and Winograd-based convolutions) and compare their highly optimized implementations on modern multi- and many-core CPUs. With all three implementations employing the same optimizations, our experimental results with modern ConvNets show that the FFT-based implementations generally outperform the Winograd-based approach, contrary to popular belief. To understand these results, we use the Roofline performance model to analyze the three implementations in detail, examining each of their computation phases and considering not only the number of floating-point operations but also memory bandwidth and cache sizes. The performance analysis explains why, and under what conditions, the FFT-based implementations outperform the Winograd-based one on modern CPUs.
AB - Winograd-based convolution has quickly gained traction as a preferred approach for implementing convolutional neural networks (ConvNets) on various hardware platforms because it can require fewer floating-point operations than FFT-based or direct convolutions. In this paper, we analyze the theoretical performance of three methods (regular FFT-, Gauss-FFT-, and Winograd-based convolutions) and compare their highly optimized implementations on modern multi- and many-core CPUs. With all three implementations employing the same optimizations, our experimental results with modern ConvNets show that the FFT-based implementations generally outperform the Winograd-based approach, contrary to popular belief. To understand these results, we use the Roofline performance model to analyze the three implementations in detail, examining each of their computation phases and considering not only the number of floating-point operations but also memory bandwidth and cache sizes. The performance analysis explains why, and under what conditions, the FFT-based implementations outperform the Winograd-based one on modern CPUs.
UR - http://www.scopus.com/inward/record.url?scp=85074509429&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074509429&partnerID=8YFLogxK
U2 - 10.1145/3330345.3330382
DO - 10.1145/3330345.3330382
M3 - Conference contribution
AN - SCOPUS:85074509429
T3 - Proceedings of the International Conference on Supercomputing
SP - 414
EP - 424
BT - ICS 2019 - International Conference on Supercomputing
PB - Association for Computing Machinery
T2 - 33rd ACM International Conference on Supercomputing, ICS 2019, held in conjunction with the Federated Computing Research Conference, FCRC 2019
Y2 - 26 June 2019
ER -