Differentially Private Image Classification by Learning Priors from Random Processes

Xinyu Tang, Ashwinee Panda, Vikash Sehwag, Prateek Mittal

Research output: Contribution to journal › Conference article › peer-review

Abstract

In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private data by incorporating priors that are learned on real-world public data. In this work, we explore how we can improve the privacy-utility tradeoff of DP-SGD by learning priors from images generated by random processes and transferring these priors to private data. We propose DP-RandP, a three-phase approach. We attain new state-of-the-art accuracy when training from scratch on CIFAR10, CIFAR100, MedMNIST and ImageNet for a range of privacy budgets ε ∈ [1, 8]. In particular, we improve the previous best reported accuracy on CIFAR10 from 60.6% to 72.3% for ε = 1. Our code is available at https://github.com/inspire-group/DP-RandP.
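
For readers unfamiliar with the clipping-and-noise mechanism the abstract refers to, the sketch below illustrates a single DP-SGD step in PyTorch. It is a minimal, naive illustration, not the DP-RandP implementation (the function name, `clip_norm`, and `noise_multiplier` are placeholders chosen here for exposition); production code typically vectorizes per-sample gradients with a library such as Opacus, and the authors' code is at the repository linked above.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, targets, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: clip each per-sample gradient,
    sum the clipped gradients, add Gaussian noise, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Naive per-sample gradient loop (libraries such as Opacus vectorize this).
    for x, y in zip(batch, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)

        # Clip the per-sample gradient to L2 norm at most clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Add Gaussian noise calibrated to the clipping norm, then average.
    batch_size = len(batch)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * (noise_multiplier * clip_norm)
        p.grad = (s + noise) / batch_size

    optimizer.step()
```

Because every per-sample gradient is clipped and noised, the effective update is biased and noisier than plain SGD, which is the utility gap that learned priors (here, priors learned from images generated by random processes) aim to narrow.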

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 36
State: Published - 2023
Event: 37th Conference on Neural Information Processing Systems, NeurIPS 2023 - New Orleans, United States
Duration: Dec 10, 2023 - Dec 16, 2023

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

