Memory-Sample Lower Bounds for Learning with Classical-Quantum Hybrid Memory

Qipeng Liu, Ran Raz, Wei Zhan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


In a work by Raz (J. ACM and FOCS 16), it was proved that any algorithm for parity learning on n bits requires either Ω(n²) bits of classical memory or an exponential number (in n) of random samples. A line of recent works continued that research direction and showed that for a large collection of classical learning tasks, either super-linear classical memory size or super-polynomially many samples are needed. All these works model learning algorithms as classical branching programs, which perform classical computation within bounded memory. However, these results do not capture all physical computational models, notably quantum computers and the use of quantum memory. This leaves open the possibility that a small piece of quantum memory could significantly reduce the need for classical memory or samples and thus completely change the nature of the classical learning task. Despite recent research on the necessity of quantum memory for intrinsically quantum learning problems such as shadow tomography and purity testing, the role of quantum memory in classical learning tasks remains obscure. In this work, we study classical learning tasks in the presence of quantum memory. We prove that any quantum algorithm with both classical and quantum memory for parity learning on n bits requires either Ω(n²) bits of classical memory, or Ω(n) bits of quantum memory, or an exponential number of samples. In other words, the memory-sample lower bound for parity learning remains qualitatively the same even if the learning algorithm can use, in addition to the classical memory, a quantum memory of size c·n (for some constant c > 0). Our result is more general and applies to many other classical learning tasks. Following previous works, we represent by the matrix M : A × X → {-1, 1} the following learning task.
An unknown x is sampled uniformly at random from a concept class X, and a learning algorithm tries to uncover x from a stream of random samples (aᵢ, bᵢ = M(aᵢ, x)), where for every i, aᵢ ∈ A is chosen uniformly at random. Assume that k, ℓ, r are integers such that any submatrix of M with at least 2^{-k}·|A| rows and at least 2^{-ℓ}·|X| columns has bias at most 2^{-r}. We prove that any algorithm with classical-quantum hybrid memory for the learning problem corresponding to M needs either (1) Ω(k · ℓ) bits of classical memory, or (2) Ω(r) qubits of quantum memory, or (3) 2^{Ω(r)} random samples, to achieve a success probability of at least 2^{-O(r)}. Our results refute the possibility that a small amount of quantum memory significantly reduces the size of the classical memory needed for efficient learning on these problems. Our results also imply improved security of several existing cryptographic protocols in the bounded-storage model (protocols that are based on parity learning on n bits), proving that security holds even in the presence of a quantum adversary with at most c·n² bits of classical memory and c·n bits of quantum memory (for some constant c > 0).
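For concreteness, the parity-learning instance of this framework has M(a, x) = (-1)^{⟨a, x⟩ mod 2}, so each sample reveals the parity of the hidden x on a random subset of coordinates. The following minimal Python sketch (illustrative only; the sample count and the hidden vector are arbitrary choices, not from the paper) simulates the sample stream a learner would observe:

```python
import random

random.seed(0)  # deterministic stream for illustration

def parity_sample(x_bits):
    """One sample (a, b) from the parity-learning stream:
    a is uniform in {0,1}^n, and b = M(a, x) = (-1)^(<a, x> mod 2)."""
    a = [random.randrange(2) for _ in range(len(x_bits))]
    inner = sum(ai * xi for ai, xi in zip(a, x_bits)) % 2
    return a, 1 - 2 * inner  # maps parity 0 -> +1, parity 1 -> -1

x = [1, 0, 1, 1]  # hidden concept, unknown to the learner
stream = [parity_sample(x) for _ in range(8)]
```

With unbounded memory, roughly n linearly independent samples suffice to recover x by Gaussian elimination over GF(2); the paper's point is that with o(n²) classical and o(n) quantum memory, exponentially many such samples are required.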

Original language: English (US)
Title of host publication: STOC 2023 - Proceedings of the 55th Annual ACM Symposium on Theory of Computing
Editors: Barna Saha, Rocco A. Servedio
Publisher: Association for Computing Machinery
Number of pages: 14
ISBN (Electronic): 9781450399135
State: Published - Jun 2 2023
Event: 55th Annual ACM Symposium on Theory of Computing, STOC 2023 - Orlando, United States
Duration: Jun 20 2023 - Jun 23 2023

Publication series

Name: Proceedings of the Annual ACM Symposium on Theory of Computing
ISSN (Print): 0737-8017


Conference: 55th Annual ACM Symposium on Theory of Computing, STOC 2023
Country/Territory: United States

All Science Journal Classification (ASJC) codes

  • Software


Keywords

  • Learning parity
  • Quantum lower bounds
  • Time-space lower bounds

