Optimizing N-dimensional, Winograd-based convolution for manycore CPUs

Zhen Jia, Aleksandar Zlateski, Fredo Durand, Kai Li

Research output: Chapter in Book/Report/Conference proceeding › Chapter

12 Scopus citations


Recent work on Winograd-based convolution allows for a great reduction of computational complexity, but existing implementations are limited to 2D data and a single kernel size of 3 × 3. They achieve only slightly better, and often worse, performance than well-optimized direct convolution implementations. We propose and implement an algorithm for N-dimensional Winograd-based convolution that allows arbitrary kernel sizes and is optimized for manycore CPUs. Our algorithm achieves high hardware utilization through a series of optimizations. Our experiments show that on modern ConvNets, our optimized implementation is on average more than 3×, and sometimes 8×, faster than other state-of-the-art CPU implementations on an Intel Xeon Phi manycore processor. Moreover, our implementation on the Xeon Phi achieves competitive performance for 2D ConvNets and superior performance for 3D ConvNets, compared with the best GPU implementations.
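To illustrate the idea behind Winograd-based convolution that the abstract refers to, the following is a minimal sketch of Winograd's classic 1D minimal-filtering algorithm F(2, 3), which produces two outputs of a 3-tap correlation using 4 multiplications instead of 6. This is a textbook example for intuition only, not the paper's N-dimensional, manycore-optimized implementation; the function name `winograd_f2_3` is ours.

```python
import numpy as np

# Standard transform matrices for Winograd's minimal filtering F(2, 3):
# two outputs of a 3-tap correlation from a 4-element input tile.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output (inverse) transform

def winograd_f2_3(d, g):
    """One F(2,3) tile: d is a 4-element input tile, g is a 3-tap filter."""
    U = G @ g     # transform the filter into the Winograd domain
    V = BT @ d    # transform the input tile
    M = U * V     # only 4 elementwise multiplications
    return AT @ M # inverse transform -> 2 output samples

# Check against direct correlation (no filter flip, as in ConvNets)
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
assert np.allclose(winograd_f2_3(d, g), direct)
```

In practice, larger tiles and higher-dimensional variants apply such transforms along each data dimension; the paper generalizes this construction to N dimensions and arbitrary kernel sizes.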

Original language: English (US)
Title of host publication: ACM SIGPLAN Notices
Publisher: Association for Computing Machinery
Number of pages: 15
ISBN (Electronic): 9781450349116
State: Published - Feb 10 2018

All Science Journal Classification (ASJC) codes

  • General Computer Science


Keywords

  • convolution
  • parallelization
  • vectorization
  • winograd


