Over-parameterized adversarial training: An analysis overcoming the curse of dimensionality

Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora

Research output: Contribution to journal › Conference article › peer-review

Abstract

Adversarial training is a popular method for giving neural nets robustness against adversarial perturbations. In practice, adversarial training leads to low robust training loss. However, a rigorous explanation of why this happens under natural conditions is still missing. Recently, a convergence theory for standard (non-adversarial) training was developed by various groups for very over-parameterized nets. It is unclear how to extend these results to adversarial training because of its min-max objective. Recently, a first step in this direction was made by [14] using tools from online learning, but they require the width of the net and the running time to be exponential in the input dimension d, and they consider an activation function that is not used in practice. Our work proves convergence to low robust training loss with polynomial width and running time, instead of exponential, under natural assumptions and with the ReLU activation. A key element of our proof is showing that ReLU networks near initialization can approximate the step function, which may be of independent interest.
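The step-function approximation mentioned above can be illustrated with a toy identity (this is only an informal sketch, not the paper's actual construction near random initialization): the difference of two ReLUs with slightly shifted biases forms a ramp that converges to the unit step function as the window width `eps` shrinks.

```python
def relu(x):
    """Rectified linear unit."""
    return max(0.0, x)

def soft_step(x, eps=0.01):
    """Approximate the unit step function by a difference of two ReLUs.

    relu(x/eps + 0.5) - relu(x/eps - 0.5) is 0 for x <= -eps/2,
    1 for x >= eps/2, and ramps linearly in between; eps is a
    hypothetical window-width parameter for this illustration.
    """
    return relu(x / eps + 0.5) - relu(x / eps - 0.5)

# Away from the threshold, the approximation matches the step function exactly
print(soft_step(-1.0))  # 0.0
print(soft_step(1.0))   # 1.0
print(soft_step(0.0))   # 0.5 (midpoint of the linear ramp)
```

Shrinking `eps` narrows the ramp, so the approximation error is confined to an arbitrarily small neighborhood of the threshold.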

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6 2020 - Dec 12 2020

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

