TY - GEN
T1 - Learning deep taxonomic priors for concept learning from few positive examples
AU - Grant, Erin
AU - Peterson, Joshua C.
AU - Griffiths, Thomas L.
N1 - Publisher Copyright:
© Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019. All rights reserved.
PY - 2019
Y1 - 2019
N2 - Human concept learning is surprisingly robust, allowing for precise generalizations given only a few positive examples. Bayesian formulations that account for this behavior require elaborate, pre-specified priors, leaving much of the learning process unexplained. More recent models of concept learning bootstrap from deep representations, but the deep neural networks are themselves trained using millions of positive and negative examples. In machine learning, recent progress in meta-learning has provided large-scale learning algorithms that can learn new concepts from a few examples, but these approaches still assume access to implicit negative evidence. In this paper, we formulate a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples. The algorithm discovers a taxonomic prior useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior, the first to do so without hand-specified domain knowledge or negative examples of a novel concept.
AB - Human concept learning is surprisingly robust, allowing for precise generalizations given only a few positive examples. Bayesian formulations that account for this behavior require elaborate, pre-specified priors, leaving much of the learning process unexplained. More recent models of concept learning bootstrap from deep representations, but the deep neural networks are themselves trained using millions of positive and negative examples. In machine learning, recent progress in meta-learning has provided large-scale learning algorithms that can learn new concepts from a few examples, but these approaches still assume access to implicit negative evidence. In this paper, we formulate a training paradigm that allows a meta-learning algorithm to solve the problem of concept learning from few positive examples. The algorithm discovers a taxonomic prior useful for learning novel concepts even from held-out supercategories and mimics human generalization behavior, the first to do so without hand-specified domain knowledge or negative examples of a novel concept.
KW - concept learning
KW - deep neural networks
KW - object taxonomies
UR - http://www.scopus.com/inward/record.url?scp=85098424386&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098424386&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85098424386
T3 - Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
SP - 1865
EP - 1870
BT - Proceedings of the 41st Annual Meeting of the Cognitive Science Society
PB - The Cognitive Science Society
T2 - 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
Y2 - 24 July 2019 through 27 July 2019
ER -