Abstract
Recently there has been much interest in modeling the activity of primate midbrain dopamine neurons as signalling reward prediction error. But since the models are based on temporal-difference (TD) learning, they assume an exponential decline with time in the value of delayed reinforcers, an assumption long known to conflict with animal behavior. We show that a variant of TD learning that tracks variations in the average reward per timestep rather than cumulative discounted reward preserves the models' success at explaining neurophysiological data while significantly increasing their applicability to behavioral data.
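As a rough illustration of the distinction the abstract draws (not the paper's implementation), the two learning rules can be contrasted as tabular TD(0) updates: standard discounted TD uses the error δ = r + γV(s′) − V(s), in which delayed rewards are weighted by an exponentially decaying factor γᵏ, whereas the average-reward variant drops the discount and instead subtracts a running estimate ρ of the reward per timestep, δ = r − ρ + V(s′) − V(s). The sketch below assumes a tabular value function; the function names and parameters (`alpha`, `gamma`, `beta`, `rho`) are illustrative choices, and the exact update for ρ may differ from the one used in the paper.

```python
# Minimal sketch contrasting discounted TD(0) with an average-reward TD(0)
# variant. Illustrative only; parameter names and the rho update rule are
# assumptions, not taken from the paper.

def discounted_td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """Standard TD(0): values estimate exponentially discounted future reward."""
    delta = r + gamma * V[s_next] - V[s]   # prediction error with discounting
    V[s] += alpha * delta
    return delta


def average_reward_td_update(V, rho, s, r, s_next, alpha=0.1, beta=0.01):
    """Average-reward TD(0): no discounting; subtract a running estimate rho
    of the average reward per timestep from the observed reward."""
    delta = r - rho + V[s_next] - V[s]     # undiscounted, average-adjusted error
    V[s] += alpha * delta
    rho += beta * delta                    # track the average reward online
    return delta, rho
```

In the second rule the prediction error δ measures deviation from the long-run average reward rather than from a discounted sum, which is the kind of TD error these models relate to dopamine activity.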
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 679-684 |
| Number of pages | 6 |
| Journal | Neurocomputing |
| Volume | 32-33 |
| DOIs | |
| State | Published - Jun 2000 |
| Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence
Keywords
- Dopamine
- Exponential discounting
- Temporal-difference learning