TY - JOUR
T1 - Limits of private learning with access to public data
AU - Alon, Noga
AU - Bassily, Raef
AU - Moran, Shay
N1 - Funding Information:
N. Alon’s research is supported in part by NSF grant DMS-1855464, BSF grant 2018267, and the Simons Foundation. R. Bassily’s research is supported by NSF Awards AF-1908281, SHF-1907715, Google Faculty Research Award, and OSU faculty start-up support.
PY - 2019
Y1 - 2019
N2 - We consider learning problems where the training set consists of two types of examples: private and public. The goal is to design a learning algorithm that satisfies differential privacy only with respect to the private examples. This setting interpolates between private learning (where all examples are private) and classical learning (where all examples are public). We study the limits of learning in this setting in terms of private and public sample complexities. We show that any hypothesis class of VC-dimension d can be agnostically learned up to an excess error of α using only (roughly) d/α public examples and d/α² private labeled examples. This result holds even when the public examples are unlabeled. This gives a quadratic improvement over the standard d/α² upper bound on the public sample complexity (where private examples can be ignored altogether if the public examples are labeled). Furthermore, we give a nearly matching lower bound, which we prove via a generic reduction from this setting to the one of private learning without public data.
AB - We consider learning problems where the training set consists of two types of examples: private and public. The goal is to design a learning algorithm that satisfies differential privacy only with respect to the private examples. This setting interpolates between private learning (where all examples are private) and classical learning (where all examples are public). We study the limits of learning in this setting in terms of private and public sample complexities. We show that any hypothesis class of VC-dimension d can be agnostically learned up to an excess error of α using only (roughly) d/α public examples and d/α² private labeled examples. This result holds even when the public examples are unlabeled. This gives a quadratic improvement over the standard d/α² upper bound on the public sample complexity (where private examples can be ignored altogether if the public examples are labeled). Furthermore, we give a nearly matching lower bound, which we prove via a generic reduction from this setting to the one of private learning without public data.
UR - http://www.scopus.com/inward/record.url?scp=85090176548&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85090176548&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85090176548
SN - 1049-5258
VL - 32
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019
Y2 - 8 December 2019 through 14 December 2019
ER -