Abstract
Reinforcement learning is a fundamental process by which organisms learn to achieve goals from their interactions with the environment. Using evolutionary computation techniques, we evolve (near-)optimal neuronal learning rules in a simple neural network model of reinforcement learning in bumblebees foraging for nectar. The resulting neural networks exhibit efficient reinforcement learning, allowing the bees to respond rapidly to changes in reward contingencies. The evolved synaptic plasticity dynamics give rise to varying exploration/exploitation levels and to the well-documented choice strategies of risk aversion and probability matching. Additionally, risk aversion is shown to emerge even when bees are evolved in a completely risk-less environment. In contrast to existing theories in economics and game theory, risk-averse behavior is shown to be a direct consequence of (near-)optimal reinforcement learning, without requiring additional assumptions such as the existence of a nonlinear subjective utility function for rewards. Our results are corroborated by a rigorous mathematical analysis, and their robustness in real-world situations is supported by experiments with a mobile robot. Thus we provide a biologically founded, parsimonious, and novel explanation for risk aversion and probability matching.
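To make the abstract's central claim concrete, the sketch below simulates the kind of learner it describes: a foraging agent choosing between two flower types, with a reward-modulated weight update standing in for the paper's evolved heterosynaptic plasticity rule. This is a minimal illustration, not the authors' actual network; the learning rate, softmax temperature, and reward schedules are assumed values chosen for demonstration.

```python
import math
import random

# Hypothetical parameters -- the paper evolves learning-rule coefficients
# with evolutionary computation; these values are illustrative only.
LEARNING_RATE = 0.3   # assumed plasticity rate
TEMPERATURE = 0.1     # assumed softmax exploration level
N_TRIALS = 500

def softmax_choice(weights, temperature):
    """Stochastically choose a flower type from synaptic weights."""
    exps = [math.exp(w / temperature) for w in weights]
    r = random.random() * sum(exps)
    for i, e in enumerate(exps):
        r -= e
        if r <= 0:
            return i
    return len(exps) - 1

def forage(rewards, n_trials=N_TRIALS):
    """Simulate a bee choosing between two flower types.

    `rewards` maps flower index -> a function returning nectar volume.
    Each weight tracks a recency-weighted mean of that flower's reward,
    a simple stand-in for the evolved plasticity dynamics.
    """
    weights = [0.5, 0.5]
    choices = []
    for _ in range(n_trials):
        a = softmax_choice(weights, TEMPERATURE)
        r = rewards[a]()
        weights[a] += LEARNING_RATE * (r - weights[a])  # delta rule
        choices.append(a)
    return choices

# Risk-aversion demo: both flowers have the same mean reward (0.5),
# but flower 1 is variable. A learner of this type tends to prefer
# the constant flower, echoing the behavior the abstract reports.
constant = lambda: 0.5
variable = lambda: 1.0 if random.random() < 0.5 else 0.0
choices = forage({0: constant, 1: variable})
frac_constant = choices[-200:].count(0) / 200
print(f"fraction of late choices to the constant flower: {frac_constant:.2f}")
```

Under these assumed settings, risk aversion falls out of the learning dynamics themselves: after a zero-reward draw from the variable flower, its weight drops and the softmax samples it less, so bad outcomes are undersampled and the constant flower wins. No nonlinear utility function is needed, which is the abstract's central point.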
Original language | English (US) |
---|---|
Pages (from-to) | 5-24 |
Number of pages | 20 |
Journal | Adaptive Behavior |
Volume | 10 |
Issue number | 1 |
DOIs | |
State | Published - 2002 |
Externally published | Yes |
All Science Journal Classification (ASJC) codes
- Experimental and Cognitive Psychology
- Philosophy
- Artificial Intelligence
Keywords
- Dopamine
- Evolutionary computation
- Heterosynaptic plasticity
- Neuromodulation
- Probability matching
- Reinforcement learning
- Risk aversion