Memory-sample lower bounds for learning parity with noise

Sumegha Garg, Pravesh K. Kothari, Pengda Liu, Ran Raz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x_1, ..., x_n) ∈ {0, 1}^n from a stream of random linear equations over F_2 that are correct with probability 1/2 + ε and flipped with probability 1/2 − ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n^2/ε) or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [8], when the samples are noisy. A matrix M : A × X → {−1, 1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x) with probability 1/2 + ε and b_i = −M(a_i, x) with probability 1/2 − ε (0 < ε < 1/2). Assume that k, ℓ, r are such that any submatrix of M with at least 2^{−k} · |A| rows and at least 2^{−ℓ} · |X| columns has a bias of at most 2^{−r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω(k·ℓ/ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}). In particular, this shows that for a large class of learning problems, the same as those in [8], any learning algorithm requires either a memory of size at least Ω((log |X|)·(log |A|)/ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments in [21, 8] to the noisy case.
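
As a concrete illustration of the sample model described above (not of the lower-bound argument itself), the following Python sketch generates a stream of noisy parity samples: each a_i is uniform over {0, 1}^n and the label b_i agrees with ⟨a_i, x⟩ mod 2 with probability 1/2 + ε. The function name lpn_sample_stream and all parameter values are illustrative assumptions, not from the paper; in the matrix formulation this corresponds to M(a, x) = (−1)^{⟨a, x⟩}.

```python
import numpy as np

def lpn_sample_stream(x, eps, num_samples, rng=None):
    """Yield noisy parity samples (a_i, b_i) for a hidden x in {0,1}^n.

    Each a_i is uniform over {0,1}^n; b_i equals <a_i, x> mod 2 with
    probability 1/2 + eps and is flipped with probability 1/2 - eps.
    (Illustrative sketch of the sample model, not the paper's algorithm.)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n)      # uniformly random equation over F_2
        clean = int(np.dot(a, x) % 2)       # correct label <a, x> mod 2
        flip = rng.random() < (0.5 - eps)   # flip with probability 1/2 - eps
        yield a, clean ^ int(flip)

# Tiny usage example with assumed parameters: n = 8, eps = 0.1, 5 samples.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)
for a, b in lpn_sample_stream(x, eps=0.1, num_samples=5, rng=rng):
    print(a, b)
```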

Original language: English (US)
Title of host publication: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2021
Editors: Mary Wootters, Laura Sanità
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN (Electronic): 9783959772075
State: Published - Sep 1 2021
Externally published: Yes
Event: 24th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2021 and 25th International Conference on Randomization and Computation, RANDOM 2021 - Virtual, Seattle, United States
Duration: Aug 16 2021 – Aug 18 2021

Publication series

Name: Leibniz International Proceedings in Informatics, LIPIcs
Volume: 207
ISSN (Print): 1868-8969

Conference

Conference: 24th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2021 and 25th International Conference on Randomization and Computation, RANDOM 2021
Country/Territory: United States
City: Virtual, Seattle
Period: 8/16/21 – 8/18/21

All Science Journal Classification (ASJC) codes

  • Software

Keywords

  • Branching program
  • Learning parity under noise
  • Memory-sample tradeoffs
  • Space lower bound
