Learning deep taxonomic priors for concept learning from few positive examples

Erin Grant, Joshua C. Peterson, Thomas L. Griffiths

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Human concept learning is surprisingly robust, allowing for precise generalizations given only a few positive examples. Bayesian formulations that account for this behavior require elaborate, pre-specified priors, leaving much of the learning process unexplained. More recent models of concept learning bootstrap from deep representations, but the deep neural networks are themselves trained using millions of positive and negative examples. In machine learning, recent progress in meta-learning has provided large-scale learning algorithms that can learn new concepts from a few examples, but these approaches still assume access to implicit negative evidence. In this paper, we formulate a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples. The algorithm discovers a taxonomic prior useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior, making it the first to do so without hand-specified domain knowledge or negative examples of a novel concept.
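To make the training paradigm described in the abstract concrete, the sketch below shows one plausible way to build positive-only few-shot episodes from an object taxonomy. This is an illustrative assumption, not the authors' implementation: the toy taxonomy, the class names, and the `sample_episode` helper are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): sampling concept-learning
# episodes that contain only positive examples, where a concept may sit at
# different levels of a taxonomy (basic-level category or supercategory).
import random

# Toy taxonomy: supercategory -> category -> example item ids (hypothetical).
TAXONOMY = {
    "animal": {"dog": ["d1", "d2", "d3"], "cat": ["c1", "c2", "c3"]},
    "vehicle": {"car": ["v1", "v2", "v3"], "bike": ["b1", "b2", "b3"]},
}


def sample_episode(taxonomy, n_support=3):
    """Sample one episode: a target concept plus a few positive examples.

    The target concept is either a single category (e.g. "dog") or an
    entire supercategory (e.g. "animal"); no negative examples are drawn,
    so the learner must rely on a prior over the taxonomy to generalize.
    """
    supercategory = random.choice(list(taxonomy))
    if random.random() < 0.5:
        # Broad concept: pool together all items under the supercategory.
        pool = [x for items in taxonomy[supercategory].values() for x in items]
        concept = supercategory
    else:
        # Narrow concept: items from one category only.
        category = random.choice(list(taxonomy[supercategory]))
        pool = taxonomy[supercategory][category]
        concept = category
    support = random.sample(pool, min(n_support, len(pool)))
    return concept, support


if __name__ == "__main__":
    concept, support = sample_episode(TAXONOMY)
    print(f"Target concept: {concept}; positive examples: {support}")
```

In a full meta-learning setup, many such episodes would be used to train a model that, given only the positive support set, predicts how far to generalize up the taxonomy; this sketch only illustrates the episode construction.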

Original language: English (US)
Title of host publication: Proceedings of the 41st Annual Meeting of the Cognitive Science Society
Subtitle of host publication: Creativity + Cognition + Computation, CogSci 2019
Publisher: The Cognitive Science Society
Pages: 1865-1870
Number of pages: 6
ISBN (Electronic): 0991196775, 9780991196777
State: Published - 2019
Event: 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019 - Montreal, Canada
Duration: Jul 24 2019 - Jul 27 2019

Publication series

Name: Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019

Conference

Conference: 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
Country/Territory: Canada
City: Montreal
Period: 7/24/19 - 7/27/19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction
  • Cognitive Neuroscience

Keywords

  • concept learning
  • deep neural networks
  • object taxonomies
