## Abstract

In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x_1, ..., x_n) ∈ {0, 1}^n from a stream of random linear equations over F_2 that are correct with probability 1/2 + ε and flipped with probability 1/2 − ε (0 < ε < 1/2), that any learning algorithm requires either a memory of size Ω(n^2/ε) or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [8], when the samples are noisy. A matrix M : A × X → {−1, 1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x) with probability 1/2 + ε and b_i = −M(a_i, x) with probability 1/2 − ε (0 < ε < 1/2). Assume that k, ℓ, r are such that any submatrix of M of at least 2^{−k} · |A| rows and at least 2^{−ℓ} · |X| columns has a bias of at most 2^{−r}. We show that any learning algorithm for the learning problem corresponding to M, with error parameter ε, requires either a memory of size at least Ω(k · ℓ / ε), or at least 2^{Ω(r)} samples. The result holds even if the learner has an exponentially small success probability (of 2^{−Ω(r)}). In particular, this shows that for a large class of learning problems, the same as those in [8], any learning algorithm requires either a memory of size at least Ω((log |X|) · (log |A|) / ε) or an exponential number of noisy samples. Our proof is based on adapting the arguments in [21, 8] to the noisy case.
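As a concrete illustration of the sample model in the abstract, the following is a minimal sketch (not from the paper) of the noisy-parity sample stream: the secret x is a bit vector, each a_i is uniform over {0, 1}^n, and the label b_i equals the inner product ⟨a_i, x⟩ mod 2 with probability 1/2 + ε and is flipped otherwise. The function name and interface are illustrative choices, not anything defined in the paper.

```python
import random

def noisy_parity_samples(x, eps, num_samples, rng=random):
    """Yield noisy parity samples (a, b) for a secret x in {0, 1}^n.

    Each a is uniform over {0, 1}^n; b = <a, x> mod 2 with
    probability 1/2 + eps, and is flipped with probability 1/2 - eps.
    """
    n = len(x)
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        # Flip the label with probability 1/2 - eps.
        if rng.random() >= 0.5 + eps:
            b ^= 1
        yield a, b
```

A memory-bounded learner would read this stream one sample at a time; the paper's result says that, for 0 < ε < 1/2, recovering x requires either Ω(n^2/ε) bits of memory or exponentially many such samples.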

Original language | English (US) |
---|---|
Title of host publication | Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2021 |
Editors | Mary Wootters, Laura Sanità |
Publisher | Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing |
ISBN (Electronic) | 9783959772075 |
DOIs | |
State | Published - Sep 1 2021 |
Externally published | Yes |
Event | 24th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2021 and 25th International Conference on Randomization and Computation, RANDOM 2021 - Virtual, Seattle, United States. Duration: Aug 16 2021 → Aug 18 2021 |

### Publication series

Name | Leibniz International Proceedings in Informatics, LIPIcs |
---|---|
Volume | 207 |
ISSN (Print) | 1868-8969 |

### Conference

Conference | 24th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2021 and 25th International Conference on Randomization and Computation, RANDOM 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Seattle |
Period | 8/16/21 → 8/18/21 |

## All Science Journal Classification (ASJC) codes

- Software

## Keywords

- Branching program
- Learning parity under noise
- Memory-sample tradeoffs
- Space lower bound