Implicit bias of gradient descent on linear convolutional networks

Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry

Research output: Contribution to journal › Conference article › peer-review

145 Scopus citations


We show that gradient descent on full-width linear convolutional networks of depth L converges to a linear predictor related to the ℓ_{2/L} bridge penalty in the frequency domain. This is in contrast to fully connected linear networks, where, regardless of depth, gradient descent converges to the ℓ_2 maximum margin solution.
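The contrast stated in the abstract can be summarized as two implicit optimization problems; the following is a hedged sketch (the notation β for the end-to-end linear predictor, F for the discrete Fourier transform, and the separable data assumption are taken from the standard setup for this line of work, not from this record page):

```latex
% Fully connected linear network of any depth (separable data):
% gradient descent converges in direction to the l2 max-margin predictor
\min_{\beta} \ \|\beta\|_2^2 \quad \text{s.t.} \quad y_n \,\beta^\top x_n \ge 1 \ \ \forall n

% Full-width linear convolutional network of depth L:
% the limit direction is instead characterized by a sparsity-inducing
% bridge penalty on the Fourier coefficients of the predictor
\min_{\beta} \ \|F(\beta)\|_{2/L}^{2/L} \quad \text{s.t.} \quad y_n \,\beta^\top x_n \ge 1 \ \ \forall n
```

For L ≥ 2 the exponent 2/L is below 1, so the penalty is non-convex and promotes sparsity in the frequency domain, which is the sense in which depth changes the implicit bias.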

Original language: English (US)
Pages (from-to): 9461-9471
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
State: Published - 2018
Externally published: Yes
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: Dec 2, 2018 to Dec 8, 2018

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing


