TY - GEN
T1 - Approximating Bayesian inference with a sparse distributed memory system
AU - Abbott, Joshua T.
AU - Hamrick, Jessica B.
AU - Griffiths, Thomas L.
N1 - Publisher Copyright:
© CogSci 2013. All rights reserved.
PY - 2013
Y1 - 2013
N2 - Probabilistic models of cognition have enjoyed recent success in explaining how people make inductive inferences. Yet, the difficult computations over structured representations that are often required by these models seem incompatible with the continuous and distributed nature of human minds. To address this issue, and to understand the implications of constraints on probabilistic models, we formalize the mechanisms by which cognitive and neural processes could approximate Bayesian inference. Specifically, we show that an associative memory system using sparse, distributed representations can be reinterpreted as an importance sampler, a Monte Carlo method for approximating Bayesian inference. This capacity is illustrated through two case studies: a simple letter reconstruction task and the classic problem of property induction. Broadly, our work demonstrates that probabilistic models can be implemented in a practical, distributed manner, and helps bridge the gap between algorithmic- and computational-level models of cognition.
KW - Bayesian inference
KW - associative memory models
KW - importance sampling
KW - rational process models
KW - sparse distributed memory
UR - http://www.scopus.com/inward/record.url?scp=85139525084&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139525084&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85139525084
T3 - Cooperative Minds: Social Interaction and Group Dynamics - Proceedings of the 35th Annual Meeting of the Cognitive Science Society, CogSci 2013
SP - 1686
EP - 1691
BT - Cooperative Minds
A2 - Knauff, Markus
A2 - Sebanz, Natalie
A2 - Pauen, Michael
A2 - Wachsmuth, Ipke
PB - The Cognitive Science Society
T2 - 35th Annual Meeting of the Cognitive Science Society - Cooperative Minds: Social Interaction and Group Dynamics, CogSci 2013
Y2 - 31 July 2013 through 3 August 2013
ER -