Leakage and the reproducibility crisis in machine-learning-based science

Sayash Kapoor, Arvind Narayanan

Research output: Contribution to journal › Article › peer-review

18 Scopus citations


Machine-learning (ML) methods have gained prominence in the quantitative sciences. However, there are many known methodological pitfalls, including data leakage, in ML-based science. We systematically investigate reproducibility issues in ML-based science. Through a survey of literature in fields that have adopted ML methods, we find 17 fields where leakage has been found, collectively affecting 294 papers and, in some cases, leading to wildly overoptimistic conclusions. Based on our survey, we introduce a detailed taxonomy of eight types of leakage, ranging from textbook errors to open research problems. We propose that researchers test for each type of leakage by filling out model info sheets, which we introduce. Finally, we conduct a reproducibility study of civil war prediction, where complex ML models are believed to vastly outperform traditional statistical models such as logistic regression (LR). When the errors are corrected, complex ML models do not perform substantively better than decades-old LR models.
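The most common form of leakage surveyed here is a textbook error: preprocessing statistics computed on the combined data before the train/test split, so information from the test set contaminates the training features. A minimal NumPy sketch of this contrast (toy data and variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 100 samples of a single feature.
X = rng.normal(loc=5.0, scale=2.0, size=100)

# Train/test split.
X_train, X_test = X[:80], X[80:]

# Leaky preprocessing: normalization statistics computed on ALL data,
# so the held-out test samples influence the training features.
leaky_mean, leaky_std = X.mean(), X.std()
X_train_leaky = (X_train - leaky_mean) / leaky_std

# Correct preprocessing: statistics come from the training split only,
# and the SAME statistics are reused to transform the test split.
train_mean, train_std = X_train.mean(), X_train.std()
X_train_clean = (X_train - train_mean) / train_std
X_test_clean = (X_test - train_mean) / train_std
```

With one feature and a mild split the numerical difference is small, but for high-dimensional models the same mistake can inflate reported performance substantially, which is the kind of overoptimism the survey documents.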

Original language: English (US)
Article number: 100804
Issue number: 9
State: Published - Sep 8 2023

All Science Journal Classification (ASJC) codes

  • General Decision Sciences


Keywords

  • DSML 3: Development/Pre-production: Data science output has been rolled out/validated across multiple domains/problems
  • leakage
  • machine learning
  • reproducibility


