Published April 2018 | public
Journal Article

Computational approaches to habits in a model-free world

Abstract

Model-free (MF) reinforcement learning (RL) algorithms account for a wealth of neuroscientific and behavioral data pertinent to habits; however, conspicuous disparities between model-predicted response patterns and experimental data have exposed the inadequacy of MF-RL to fully capture the domain of habitual behavior. We review several extensions to generic MF-RL algorithms that could narrow the gap between theory and empirical data. We discuss insights gained from extending RL algorithms to operate in complex environments with multidimensional continuous state spaces. We also review recent advances in hierarchical RL and their potential relevance to habits. Neurobiological evidence suggests that similar mechanisms for habitual learning and control may apply across diverse psychological domains.
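The class of generic MF-RL algorithms the abstract refers to can be illustrated with a minimal tabular Q-learning sketch: values of state-action pairs are cached directly from experienced reward, with no model of the environment's transitions. This is an illustrative example only, not the authors' implementation; the environment, state and action names, and parameter values are hypothetical.

```python
def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """One model-free temporal-difference update: adjust the cached
    value of (state, action) toward the experienced reward plus the
    discounted value of the best next action, using no transition model."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_error = reward + gamma * best_next - Q[state][action]
    Q[state][action] += alpha * td_error
    return Q

# Toy habit-like scenario (hypothetical): in state "cue", the action
# "press" always yields reward 1 and ends the episode.
Q = {"cue": {"press": 0.0, "wait": 0.0}, "end": {}}
for _ in range(100):
    Q = q_learning_update(Q, "cue", "press", reward=1.0, next_state="end")
```

After repeated pairings, the cached value of "press" in state "cue" approaches the reward of 1.0, while "wait" stays at 0.0 — the value is stamped in by experience alone, which is why such cached values persist even when the outcome is later devalued, the signature disparity between MF-RL predictions and habit data that the review discusses.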

Additional Information

© 2017 Elsevier Ltd. Available online 20 December 2017. This work was supported by an NIDA-NIH R01 grant (1R01DA040011-01A1). The authors would like to thank all members of the O'Doherty Human Reward and Decision Making laboratory for intriguing discussions.

Additional details

Created:
August 19, 2023
Modified:
October 18, 2023