TY - JOUR
T1 - Near Optimal Distributed Learning of Halfspaces with Two Parties
AU - Braverman, Mark
AU - Kol, Gillat
AU - Moran, Shay
AU - Saxena, Raghuvansh R.
N1 - Funding Information:
Shay Moran is a Robert J. Shillman Fellow and is supported by the ISF, grant no. 1225/20, by an Azrieli Faculty Fellowship, and by BSF grant 2018385.
Funding Information:
We thank Bernard Chazelle and an anonymous reviewer of a previous version of this manuscript for pointing out the connection between containers and hyperplane cuttings. We thank Noga Alon, Sepehr Assadi, Steve Hanneke, Shachar Lovett and Nikita Zhivotovskii for providing insightful comments.
Publisher Copyright:
© 2021 M. Braverman, G. Kol, S. Moran & R.R. Saxena.
PY - 2021
Y1 - 2021
AB - Distributed learning protocols are designed to train on distributed data without gathering it all on a single centralized machine, thus contributing to the efficiency of the system and enhancing its privacy. We study a central problem in distributed learning, called distributed learning of halfspaces: let U ⊆ R^d be a known domain of size n and let h : R^d → R be an unknown target affine function. A set of examples {(u, b)} is distributed among several parties, where u ∈ U is a point and b = sign(h(u)) ∈ {±1} is its label. The parties’ goal is to agree on a classifier f : U → {±1} such that f(u) = b for every input example (u, b). We design a protocol for the distributed halfspace learning problem in the two-party setting that communicates only Õ(d log n) bits. To this end, we introduce a new tool called halfspace containers, which is closely related to bracketing numbers in statistics and to hyperplane cuttings in discrete geometry, and which allows for a compressed approximate representation of every halfspace. We complement our upper bound with an almost matching Ω̃(d log n) lower bound on the communication complexity of any such protocol. Since the distributed halfspace learning problem is closely related to the convex set disjointness problem in communication complexity and to the problem of distributed linear programming in distributed optimization, we also derive upper and lower bounds of Õ(d^2 log n) and Ω̃(d log n) on the communication complexity of both of these basic problems.
KW - Communication Complexity
KW - Distributed Learning
UR - http://www.scopus.com/inward/record.url?scp=85124010511&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124010511&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85124010511
SN - 2640-3498
VL - 134
SP - 724
EP - 758
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 34th Conference on Learning Theory, COLT 2021
Y2 - 15 August 2021 through 19 August 2021
ER -