Stop Memorizing: A Data-Dependent Regularization Framework for Intrinsic Pattern Learning

Research output: Contribution to journal › Article › peer-review


Abstract

Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force, even when conventional data-dependent regularization focusing on the geometry of the features is imposed. We find that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross-entropy loss. To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet). During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with that geometry. We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which produces highly discriminative features residing in orthogonal low-rank subspaces. Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data, especially when training samples are scarce. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting that it learns only intrinsic patterns by reducing the memorizing capacity of the baseline DNN.
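The geometric penalty behind OLE can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function name `ole_loss` and the exact (threshold-free) form are assumptions here, but the core idea matches the abstract — the sum of per-class nuclear norms minus the nuclear norm of the full feature matrix is zero exactly when the class features occupy mutually orthogonal subspaces, and positive otherwise.

```python
import numpy as np

def ole_loss(features, labels):
    """Illustrative Orthogonal-Low-rank Embedding (OLE) penalty.

    Computes sum_c ||X_c||_* - ||X||_*, where X_c are the rows of the
    feature matrix X belonging to class c and ||.||_* is the nuclear
    norm (sum of singular values). By subadditivity of the nuclear
    norm this quantity is always >= 0, and it vanishes exactly when
    the class feature subspaces are mutually orthogonal.
    """
    def nuclear_norm(m):
        # Nuclear norm = sum of singular values.
        return np.linalg.svd(m, compute_uv=False).sum()

    total = nuclear_norm(features)
    per_class = sum(nuclear_norm(features[labels == c])
                    for c in np.unique(labels))
    return per_class - total  # >= 0; zero iff class subspaces are orthogonal
```

For example, two classes whose features live in orthogonal coordinate subspaces yield a penalty of exactly zero, while two classes sharing a generic subspace yield a strictly positive penalty — which is what makes minimizing it drive features toward orthogonal low-rank subspaces.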

Original language: English (US)
Pages (from-to): 476-496
Number of pages: 21
Journal: SIAM Journal on Mathematics of Data Science
Volume: 1
Issue number: 3
DOIs
State: Published - 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistics and Probability
  • Computational Mathematics
  • Applied Mathematics

Keywords

  • data-dependent regularization
  • deep neural networks
  • intrinsic pattern learning
  • network memorization

