We study a simple model for the statistics of neural spike trains as they encode a continuously varying signal. The model is motivated by several recent experiments on sensory neurons, and we show how analogies between the relevant probabilistic issues in neural coding and statistical mechanics can be exploited. Results are given for the information capacity of the code, for the optimal structure of code-reading algorithms, and for the time delays that arise in optimal processing of the coded signal. In addition, we show how simple analog computations can be expressed directly in terms of transformations of the spike train. The rules for reading the code and for optimal analog computation depend on the context for behavioral decision making: the relative weights assigned to different types of errors and the relative importance of different signals. We find that there is a conflict between minimizing this context dependence of the code and maximizing its information capacity; a compromise can be achieved by appropriate preprocessing (filtering) of the encoded signal. Experiments on auditory and visual neurons qualitatively confirm the predicted filtering. Similarly, the structure of the optimal "multiplier neuron" is shown to depend upon the intensity and spectral content of incoming signals, and these predictions compare favorably with experiments on a movement-sensitive cell in the fly visual system.
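The "code-reading algorithms" mentioned above can be illustrated with a minimal numerical sketch (not taken from the paper): a signal modulates the firing probability of a Poisson-like model neuron, and the signal is then reconstructed by passing the spike train through a linear (Wiener) filter estimated in the frequency domain. The encoder model, the 20 Hz signal bandwidth, the 40 spikes/s mean rate, and the spectral smoothing are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1 << 14          # number of time bins
dt = 1e-3            # bin width: 1 ms

# Continuously varying Gaussian signal: low-pass filtered white noise.
freqs = np.fft.rfftfreq(T, dt)
lowpass = 1.0 / (1.0 + (freqs / 20.0) ** 2)        # ~20 Hz corner (assumed)
s = np.fft.irfft(np.fft.rfft(rng.standard_normal(T)) * lowpass, T)
s = (s - s.mean()) / s.std()                        # unit variance

# Encode: spike probability per bin increases with the signal
# (a toy Poisson-like encoder, mean rate ~40 spikes/s -- an assumption).
rate = 40.0 * np.exp(0.5 * s)
spikes = (rng.random(T) < np.clip(rate * dt, 0.0, 1.0)).astype(float)
x = spikes - spikes.mean()

# Decode: optimal linear filter K(w) = <S(w) X*(w)> / <|X(w)|^2>,
# with the spectral estimates smoothed to tame sampling noise.
S, X = np.fft.rfft(s), np.fft.rfft(x)
win = np.ones(32) / 32
num = np.convolve(S * np.conj(X), win, mode="same")
den = np.convolve(np.abs(X) ** 2, win, mode="same")
s_hat = np.fft.irfft((num / (den + 1e-9)) * X, T)

# Reconstruction error relative to the signal variance (which is 1).
err = np.mean((s - s_hat) ** 2)
print(f"reconstruction MSE = {err:.3f} (signal variance = 1)")
```

Because the Wiener filter shrinks toward zero at frequencies where the spike train carries little signal, the reconstruction error stays below the signal variance; in the paper's terms, the filter is one concrete instance of a rule for reading the code.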
All Science Journal Classification (ASJC) codes
- Statistical and Nonlinear Physics
- Mathematical Physics
Keywords
- Neural networks
- analog computation
- signal processing