UNDERSTANDING INFLUENCE FUNCTIONS AND DATAMODELS VIA HARMONIC ANALYSIS

Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora

Research output: Contribution to conference › Paper › peer-review


Abstract

Influence functions estimate the effect of individual training data points on a model's predictions on test data; they were adapted to deep learning by Koh & Liang (2017). They have been used for detecting data poisoning, identifying helpful and harmful examples, estimating the influence of groups of datapoints, etc. Recently, Ilyas et al. (2022) introduced a linear regression method they termed datamodels to predict the effect of training points on outputs on test data. The current paper seeks to provide a better theoretical understanding of such interesting empirical phenomena. The primary tools are harmonic analysis and the idea of noise stability. Contributions include: (a) an exact characterization of the learnt datamodel in terms of Fourier coefficients; (b) an efficient method to estimate the residual error and quality of the optimal linear datamodel without having to train the datamodel; (c) new insights into when influences of groups of datapoints may or may not add up linearly.
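
To make the datamodel idea concrete, the following is a minimal sketch, not the authors' implementation; it uses the uniform ±1 subset encoding rather than the paper's subsampled training-set distribution. It fits a linear datamodel by least squares over random subset encodings and, separately, estimates the degree-0 and degree-1 Fourier coefficients by Monte Carlo; under the uniform measure the two agree, which is the flavor of contribution (a). The callback f, standing in for retraining the model on the encoded subset and evaluating a fixed test point, is hypothetical.

    import numpy as np

    def fit_linear_datamodel(f, n, num_samples=2000, seed=0):
        """Least-squares linear datamodel for f over random +/-1 subset encodings.

        f(x) takes x in {-1, +1}^n (x[i] = +1 means training point i is
        included) and returns a scalar test output. In practice f would
        retrain the model on that subset; here it is an arbitrary callback.
        """
        rng = np.random.default_rng(seed)
        X = rng.choice([-1.0, 1.0], size=(num_samples, n))
        y = np.array([f(x) for x in X])
        A = np.column_stack([np.ones(num_samples), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef[0], coef[1:]  # bias term and per-point weights

    def degree1_fourier(f, n, num_samples=2000, seed=0):
        """Monte Carlo estimate of f's degree-0 and degree-1 Fourier coefficients:
        fhat(empty) = E[f(x)], fhat({i}) = E[f(x) * x[i]] for uniform x."""
        rng = np.random.default_rng(seed)
        X = rng.choice([-1.0, 1.0], size=(num_samples, n))
        y = np.array([f(x) for x in X])
        return y.mean(), (X * y[:, None]).mean(axis=0)

    if __name__ == "__main__":
        n = 8
        # Toy stand-in for a retrain-and-evaluate pipeline, with a small
        # quadratic interaction term that no linear datamodel can capture.
        f = lambda x: 0.5 * x[0] - 0.2 * x[3] + 0.1 * x[0] * x[1]
        bias, w = fit_linear_datamodel(f, n)
        mean, fhat1 = degree1_fourier(f, n)
        print(np.round(w, 2))      # roughly [0.5, 0, 0, -0.2, 0, ...]
        print(np.round(fhat1, 2))  # matches w up to sampling noise

On this toy f, the least-squares weights match the degree-1 Fourier estimates up to sampling noise, while the quadratic term shows up only in the residual error, the quantity that contribution (b) concerns estimating without fitting the datamodel.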

Original language: English (US)
State: Published - 2023
Event: 11th International Conference on Learning Representations, ICLR 2023 - Kigali, Rwanda
Duration: May 1 2023 - May 5 2023

Conference

Conference: 11th International Conference on Learning Representations, ICLR 2023
Country/Territory: Rwanda
City: Kigali
Period: 5/1/23 - 5/5/23

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
