Published October 2020 | Accepted Version
Journal Article | Open Access

Beyond dichotomies in reinforcement learning

Abstract

Reinforcement learning (RL) is a framework of particular importance to psychology, neuroscience and machine learning. Interactions between these fields, as promoted through the common hub of RL, have facilitated paradigm shifts that relate multiple levels of analysis within a singular framework (for example, relating dopamine function to a computationally defined RL signal). Recently, more sophisticated RL algorithms have been proposed to better account for human learning, and in particular its oft-documented reliance on two separable systems: a model-based (MB) system and a model-free (MF) system. However, along with many benefits, this dichotomous lens can distort questions, and may contribute to an unnecessarily narrow perspective on learning and decision-making. Here, we outline some of the consequences that come from overconfidently mapping algorithms, such as MB versus MF RL, onto putative cognitive processes. We argue that the field is well positioned to move beyond simplistic dichotomies, and we propose a means of refocusing research questions towards the rich and complex components that comprise learning and decision-making.
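To make the MB/MF distinction the abstract invokes concrete, the minimal Python sketch below contrasts the two update rules in textbook form. It is not taken from the article: the toy environment, parameter values and function names are all hypothetical, chosen only to illustrate the standard contrast between a cached, prediction-error-driven MF update (the TD signal often linked to phasic dopamine) and MB values recomputed on demand from a world model.

    import numpy as np

    # Hypothetical toy MDP: 3 states, 2 actions, with a known transition
    # model T and expected rewards R (illustrative values only).
    n_states, n_actions = 3, 2
    rng = np.random.default_rng(0)
    T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P(s'|s,a)
    R = rng.uniform(size=(n_states, n_actions))                       # E[r|s,a]
    gamma = 0.9

    def mf_td_update(Q, s, a, r, s_next, alpha=0.1):
        # Model-free: incrementally adjust a cached value by the
        # reward-prediction error (the TD error).
        delta = r + gamma * Q[s_next].max() - Q[s, a]
        Q[s, a] += alpha * delta
        return Q

    def mb_values(T, R, n_iters=100):
        # Model-based: recompute values from the world model by
        # iterating the Bellman optimality equation.
        Q = np.zeros((n_states, n_actions))
        for _ in range(n_iters):
            V = Q.max(axis=1)          # greedy state values
            Q = R + gamma * (T @ V)    # one Bellman backup
        return Q

    Q_mf = mf_td_update(np.zeros((n_states, n_actions)), s=0, a=1, r=1.0, s_next=2)
    Q_mb = mb_values(T, R)

The sketch also shows the trade-off the dichotomy is meant to capture: the MF cache is cheap to query but revalues slowly after the environment changes, whereas the MB computation adapts immediately to a new R or T at the cost of recomputing values each time.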

Additional Information

© 2020 Nature Publishing Group. Accepted 20 July 2020; Published 01 September 2020; Issue Date October 2020. Author Contributions: The authors contributed equally to all aspects of the article. The authors declare no competing interests. Peer review information: Nature Reviews Neuroscience thanks the anonymous reviewer(s) for their contribution to the peer review of this work.

Files

Accepted Version - nihms-1656813.pdf (955.0 kB, md5:00609d0b355d76140cd99695037a3d70)

Additional details

Created: September 22, 2023
Modified: October 23, 2023