The Eighty Five Percent Rule for optimal learning

Robert C. Wilson, Amitai Shenhav, Mark Straccia, Jonathan D. Cohen

Research output: Contribution to journal › Article › peer-review

65 Scopus citations

Abstract

Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals or machines. Here, we examine the effect of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
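
The 15.87% figure is Φ(−1), the standard normal CDF evaluated at −1. The following is a minimal numerical sketch of the Gaussian-noise case (our illustration, not the authors' published code), assuming the paper's setup in which the training error rate is ER = Φ(−βΔ) for model precision β and stimulus difficulty Δ, so that gradient descent on the error yields a learning speed proportional to f(ER) = −Φ⁻¹(ER)·φ(Φ⁻¹(ER)):

    # Minimal sketch (not the authors' code) of the Gaussian-noise derivation:
    # learning speed as a function of training error rate ER is proportional to
    # f(ER) = -Phi^{-1}(ER) * phi(Phi^{-1}(ER)), where Phi and phi are the
    # standard normal CDF and PDF.
    import numpy as np
    from scipy.stats import norm

    er = np.linspace(1e-4, 0.4999, 100_000)   # candidate training error rates
    x = norm.ppf(er)                          # x = Phi^{-1}(ER)
    speed = -x * norm.pdf(x)                  # learning speed, up to a constant

    best = er[np.argmax(speed)]
    print(f"numerical optimum: {best:.4f}")         # ~0.1587
    print(f"Phi(-1):           {norm.cdf(-1):.4f}") # 0.1587, i.e. ~85% accuracy

Analytically, with x = Φ⁻¹(ER), setting df/dx = −φ(x)(1 − x²) = 0 gives x = −1 (taking the sub-chance branch), so the optimum is exactly Φ(−1) ≈ 0.1587, i.e. roughly 85% training accuracy.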

Original language: English (US)
Article number: 4646
Journal: Nature Communications
Volume: 10
Issue number: 1
DOIs
State: Published - Dec 1 2019

All Science Journal Classification (ASJC) codes

  • General Chemistry
  • General Biochemistry, Genetics and Molecular Biology
  • General Physics and Astronomy
