TY - GEN
T1 - Extracting and Utilizing Abstract, Structured Representations for Analogy
AU - Frankland, Steven M.
AU - Webb, Taylor W.
AU - Petrov, Alexander A.
AU - O'Reilly, Randall C.
AU - Cohen, Jonathan D.
N1 - Publisher Copyright:
© Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019. All rights reserved.
PY - 2019
Y1 - 2019
AB - Human analogical ability involves the re-use of abstract, structured representations within and across domains. Here, we present a generative neural network that completes analogies in a 1D metric space, without explicit training on analogy. Our model integrates two key ideas. First, it operates over representations inspired by properties of the mammalian Entorhinal Cortex (EC), believed to extract low-dimensional representations of the environment from the transition probabilities between states. Second, we show that a neural network equipped with a simple predictive objective and highly general inductive bias can learn to utilize these EC-like codes to compute explicit, abstract relations between pairs of objects. The proposed inductive bias favors a latent code that consists of anti-correlated representations. The relational representations learned by the model can then be used to complete analogies involving the signed distance between novel input pairs (1:3 :: 5:? (7)), and extrapolate outside of the network's training domain. As a proof of principle, we extend the same architecture to more richly structured tree representations. We suggest that this combination of predictive, error-driven learning and simple inductive biases offers promise for deriving and utilizing the representations necessary for high-level cognitive functions, such as analogy.
KW - abstract structured representations
KW - analogy
KW - neural networks
KW - predictive learning
KW - relational reasoning
UR - http://www.scopus.com/inward/record.url?scp=85078872513&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078872513&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85078872513
T3 - Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
SP - 1766
EP - 1772
BT - Proceedings of the 41st Annual Meeting of the Cognitive Science Society
PB - The Cognitive Science Society
T2 - 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
Y2 - 24 July 2019 through 27 July 2019
ER -