TY - JOUR
T1 - Bayesian latent structure discovery from multi-neuron recordings
AU - Linderman, Scott W.
AU - Adams, Ryan P.
AU - Pillow, Jonathan W.
N1 - Funding Information:
We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data. SWL is supported by the Simons Foundation SCGB-418011. RPA is supported by NSF IIS-1421780 and the Alfred P. Sloan Foundation. JWP was supported by grants from the McKnight Foundation, Simons Collaboration on the Global Brain (SCGB AWD1004351), NSF CAREER Award (IIS-1150186), and NIMH grant MH099611.
Publisher Copyright:
© 2016 NIPS Foundation - All Rights Reserved.
PY - 2016
Y1 - 2016
N2 - Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties. However, traditional methods for dimensionality reduction and clustering are ill-suited to recovering the structure underlying the organization of neural circuits. In particular, they do not take advantage of the rich temporal dependencies in multi-neuron recordings and fail to account for the noise in neural spike trains. Here we describe new tools for inferring latent structure from simultaneously recorded spike train data using a hierarchical extension of a multi-neuron point process model commonly known as the generalized linear model (GLM). Our approach combines the GLM with flexible graph-theoretic priors governing the relationship between latent features and neural connectivity patterns. Fully Bayesian inference via Pólya-gamma augmentation of the resulting model allows us to classify neurons and infer latent dimensions of circuit organization from correlated spike trains. We demonstrate the effectiveness of our method with applications to synthetic data and multi-neuron recordings in primate retina, revealing latent patterns of neural types and locations from spike trains alone.
UR - http://www.scopus.com/inward/record.url?scp=85019225034&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019225034&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85019225034
SN - 1049-5258
SP - 2010
EP - 2018
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 30th Annual Conference on Neural Information Processing Systems, NIPS 2016
Y2 - 5 December 2016 through 10 December 2016
ER -