PZnet: Efficient 3D ConvNet Inference on Manycore CPUs

Sergiy Popovych, Davit Buniatyan, Aleksandar Zlateski, Kai Li, Hyunjune Sebastian Seung

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

5 Scopus citations


Convolutional nets have been shown to achieve state-of-the-art accuracy in many biomedical image analysis tasks. Many tasks in the biomedical domain involve analyzing volumetric (3D) data acquired by CT, MRI, and microscopy. To deploy convolutional nets in practical working systems, it is important to solve the efficient inference problem: applying an already-trained convolutional net to many large images using limited computational resources. In this paper we present PZnet, a CPU-only engine that can be used to perform inference for a variety of 3D convolutional net architectures. PZnet outperforms MKL-based CPU implementations of PyTorch and TensorFlow by more than 3.5x for the popular U-Net architecture. Moreover, for 3D convolutions with low feature map counts, cloud CPU inference with PZnet outperforms cloud GPU inference in terms of cost efficiency.
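To make the core operation concrete, below is a hypothetical minimal sketch of a direct (naive) 3D convolution, the kernel of the inference workload the abstract describes. This is an illustration only, not PZnet's implementation: PZnet targets vectorized (SIMD) execution on Intel Xeon CPUs, while this pure-Python version simply shows what a single-channel, valid-mode, stride-1 3D convolution computes.

```python
def conv3d(volume, kernel):
    """Naive valid-mode 3D convolution (no padding, stride 1).

    volume: nested list of shape [D][H][W]
    kernel: nested list of shape [kd][kh][kw]
    Returns an output of shape [D-kd+1][H-kh+1][W-kw+1].
    """
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = [[[0.0] * (W - kw + 1) for _ in range(H - kh + 1)]
           for _ in range(D - kd + 1)]
    # Slide the kernel over every valid position in the volume.
    for z in range(D - kd + 1):
        for y in range(H - kh + 1):
            for x in range(W - kw + 1):
                s = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            s += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                out[z][y][x] = s
    return out

# Example: a 3x3x3 volume of ones convolved with a 2x2x2 sum kernel.
# Each output voxel sums 8 ones, so every entry is 8.0.
vol = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
ker = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
result = conv3d(vol, ker)
```

The six nested loops make clear why 3D inference is costly and why engines like PZnet restructure this computation for cache locality and SIMD units rather than running it directly.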

Original language: English (US)
Title of host publication: Advances in Computer Vision - Proceedings of the 2019 Computer Vision Conference CVC
Editors: Supriya Kapoor, Kohei Arai
Publisher: Springer Verlag
Number of pages: 15
ISBN (Print): 9783030177942
State: Published - 2020
Event: Computer Vision Conference, CVC 2019 - Las Vegas, United States
Duration: Apr 25 2019 to Apr 26 2019

Publication series

Name: Advances in Intelligent Systems and Computing
ISSN (Print): 2194-5357
ISSN (Electronic): 2194-5365


Conference: Computer Vision Conference, CVC 2019
Country/Territory: United States
City: Las Vegas

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • General Computer Science


Keywords

  • 3D convolutions
  • Image segmentation
  • Intel Xeon
  • SIMD


