Constructing hard functions using learning algorithms

Adam Klivans, Pravesh Kothari, Igor C. Oliveira

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

24 Scopus citations

Abstract

Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class of Boolean circuits C contained in P/poly is exactly learnable with membership and equivalence queries in polynomial time, then EXP^NP is not contained in C (the class EXP^NP was subsequently improved to EXP by Hitchcock and Harkins). In this paper, we improve on these results and show:

  • If C is exactly learnable with membership and equivalence queries in polynomial time, then DTIME(n^{\omega(1)}) is not contained in C. We obtain even stronger consequences if C is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against C.
  • If C is learnable in polynomial time in the PAC model, then PSPACE is not contained in C, unless PSPACE is contained in BPP. Removing this extra assumption from the statement of the theorem would provide an unconditional separation of PSPACE and BPP.
  • If C is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function f that is average-case hard for circuits in C. This result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al., 1993). We also obtain a non-constructive extension of this result to the stronger Statistical Query (SQ) model.

Similar results hold in the case where the learning algorithm runs in subexponential time. Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to 'diagonalize' over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.
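The 'diagonalization' step mentioned in the abstract can be illustrated with a small, self-contained sketch. The following Python snippet is not from the paper: it fixes the classic halving algorithm as a stand-in for an arbitrary mistake-bounded learner, and the names (diagonalize, halving_predict) are illustrative. Labeling every point opposite to the learner's prediction forces a mistake on each round; since the halving learner makes at most log2(|C|) mistakes on any target in a finite class C, after more than log2(|C|) forced mistakes no hypothesis in C can agree with the constructed function f.

    def halving_predict(version_space, x):
        # Predict the majority vote of the hypotheses still consistent with f.
        votes = sum(c(x) for c in version_space)
        return 1 if 2 * votes >= len(version_space) else 0

    def diagonalize(C, domain):
        # Label each point opposite to the learner's prediction. Every round
        # is a forced mistake, and the version space at least halves on each
        # mistake, so after more than log2(|C|) rounds no hypothesis in C
        # agrees with f on the whole domain.
        version_space = list(C)
        f = {}
        for x in domain:
            f[x] = 1 - halving_predict(version_space, x)
            version_space = [c for c in version_space if c(x) == f[x]]
        return f

    # Toy usage: 4 hypotheses, so 3 > log2(4) forced mistakes rule out all of them.
    C = [lambda x: 0, lambda x: 1, lambda x: x & 1, lambda x: (x >> 1) & 1]
    f = diagonalize(C, domain=[0, 1, 2])
    assert all(any(c(x) != f[x] for x in f) for c in C)

In the paper this idea is carried out against arbitrary mistake-bounded learners for families of polynomial-size circuits, where the resulting hard function is explicit and computable in time comparable to the learner's running time; the sketch above only shows the adversarial labeling step for a finite class.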

Original language: English (US)
Title of host publication: Proceedings - 2013 IEEE Conference on Computational Complexity, CCC 2013
Pages: 86-97
Number of pages: 12
DOIs
State: Published - 2013
Externally published: Yes
Event: 2013 IEEE Conference on Computational Complexity, CCC 2013 - Palo Alto, CA, United States
Duration: Jun 5 2013 - Jun 7 2013

Publication series

Name: Proceedings of the Annual IEEE Conference on Computational Complexity
ISSN (Print): 1093-0159

Other

Other: 2013 IEEE Conference on Computational Complexity, CCC 2013
Country/Territory: United States
City: Palo Alto, CA
Period: 6/5/13 - 6/7/13

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Computational Mathematics

Keywords

  • Circuits
  • Discrepancy
  • Karp-Lipton
  • Learning
  • Lower Bounds
  • SQ Learning
