Abstract
The field of computational reinforcement learning (RL) has proved extremely useful in research on human and animal behavior and brain function. However, the simple forms of RL considered in most empirical research do not scale well, making their relevance to complex, real-world behavior unclear. In computational RL, one strategy for addressing the scaling problem is to introduce hierarchical structure, an approach that has intriguing parallels with human behavior. We have begun to investigate the potential relevance of hierarchical RL (HRL) to human and animal behavior and brain function. In the present chapter, we first review two results demonstrating neural correlates of key predictions from HRL. We then focus on one aspect of this work, which deals with the question of how action hierarchies are initially established. Work in HRL suggests that hierarchy learning is accomplished by identifying useful subgoal states, and that this might in turn be accomplished through a structural analysis of the given task domain. We review results from a set of behavioral and neuroimaging experiments in which we have investigated the relevance of these ideas to human learning and decision making.
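The idea of identifying subgoals through structural analysis can be illustrated with a minimal sketch. This is not the chapter's actual method, only a common graph-theoretic instance of the idea from the HRL literature: abstract a toy two-room task as a state graph and score each state by how many shortest paths pass through it (a betweenness-style count), so that "bottleneck" states such as a doorway emerge as candidate subgoals. The gridworld, state names, and scoring function here are all assumptions for illustration.

```python
# Illustrative sketch: subgoal discovery via structural analysis of a
# state-transition graph. Bottleneck states (e.g., a doorway between two
# rooms) lie on many shortest paths and thus score highly.

from collections import deque
from itertools import combinations

def bfs_path(graph, start, goal):
    """Return one shortest path from start to goal via BFS, or None."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    return None

def subgoal_scores(graph):
    """Count, for each state, how often it is interior to a shortest path."""
    scores = {state: 0 for state in graph}
    for s, t in combinations(graph, 2):
        path = bfs_path(graph, s, t)
        if path:
            for state in path[1:-1]:  # exclude the endpoints themselves
                scores[state] += 1
    return scores

# Toy two-room world: each room is a small clique of states; the only
# route between rooms runs through the "door" state.
graph = {
    "a1": ["a2", "a3", "door"],
    "a2": ["a1", "a3"],
    "a3": ["a1", "a2"],
    "door": ["a1", "b1"],
    "b1": ["b2", "b3", "door"],
    "b2": ["b1", "b3"],
    "b3": ["b1", "b2"],
}

scores = subgoal_scores(graph)
print(max(scores, key=scores.get))  # → door
```

Every cross-room trajectory must pass through the doorway, so it accumulates the highest score and would be proposed as a subgoal around which a reusable sub-policy could be learned.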
Original language | English (US) |
---|---|
Title of host publication | Computational and Robotic Models of the Hierarchical Organization of Behavior |
Publisher | Springer-Verlag Berlin Heidelberg |
Pages | 271-291 |
Number of pages | 21 |
ISBN (Electronic) | 9783642398759 |
ISBN (Print) | 364239874X, 9783642398742 |
State | Published - Jan 1 2013 |
All Science Journal Classification (ASJC) codes
- General Computer Science