TY - JOUR
T1 - Bayesian active learning of neural firing rate maps with transformed gaussian process priors
AU - Park, Mijung
AU - Weller, J. Patrick
AU - Horwitz, Gregory D.
AU - Pillow, Jonathan W.
N1 - Publisher Copyright:
© 2014 Massachusetts Institute of Technology.
PY - 2014/8/13
Y1 - 2014/8/13
N2 - A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology experiments. These methods can accelerate the characterization of such maps through the intelligent, adaptive selection of stimuli. Specifically, we explore the manner in which the prior and utility function used in Bayesian active learning affect stimulus selection and performance. Our approach relies on a flexible model that involves a nonlinearly transformed gaussian process (GP) prior over maps and conditionally Poisson spiking. We show that infomax learning, which selects stimuli to maximize the information gain about the firing rate map, exhibits strong dependence on the seemingly innocuous choice of nonlinear transformation function. We derive an alternate utility function that selects stimuli to minimize the average posterior variance of the firing rate map and analyze the surprising relationship between prior parameterization, stimulus selection, and active learning performance in GP-Poisson models. We apply these methods to color tuning measurements of neurons in macaque primary visual cortex.
AB - A firing rate map, also known as a tuning curve, describes the nonlinear relationship between a neuron's spike rate and a low-dimensional stimulus (e.g., orientation, head direction, contrast, color). Here we investigate Bayesian active learning methods for estimating firing rate maps in closed-loop neurophysiology experiments. These methods can accelerate the characterization of such maps through the intelligent, adaptive selection of stimuli. Specifically, we explore the manner in which the prior and utility function used in Bayesian active learning affect stimulus selection and performance. Our approach relies on a flexible model that involves a nonlinearly transformed gaussian process (GP) prior over maps and conditionally Poisson spiking. We show that infomax learning, which selects stimuli to maximize the information gain about the firing rate map, exhibits strong dependence on the seemingly innocuous choice of nonlinear transformation function. We derive an alternate utility function that selects stimuli to minimize the average posterior variance of the firing rate map and analyze the surprising relationship between prior parameterization, stimulus selection, and active learning performance in GP-Poisson models. We apply these methods to color tuning measurements of neurons in macaque primary visual cortex.
UR - http://www.scopus.com/inward/record.url?scp=84929249823&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84929249823&partnerID=8YFLogxK
U2 - 10.1162/NECO_a_00615
DO - 10.1162/NECO_a_00615
M3 - Article
C2 - 24877730
AN - SCOPUS:84929249823
SN - 0899-7667
VL - 26
SP - 1519
EP - 1541
JO - Neural Computation
JF - Neural Computation
IS - 8
ER -