Published August 7, 2017 | Submitted
Report | Open

Nonparametric Learning Rules from Bandit Experiments: The Eyes have it!

Abstract

How do people learn? We assess, in a distribution-free manner, subjects' learning and choice rules in dynamic two-armed bandit learning experiments. To aid in identification and estimation, we use auxiliary measures of subjects' beliefs, in the form of their eye movements during the experiment. Our estimated choice probabilities and learning rules have some distinctive features; notably, subjects tend to update in a non-smooth manner following choices made in accordance with current beliefs. Moreover, the beliefs implied by our nonparametric learning rules are closer to those from a (non-Bayesian) reinforcement learning model than to those from a Bayesian learning model.
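The abstract contrasts beliefs from a (non-Bayesian) reinforcement learning model with those from a Bayesian learning model. The minimal Python sketch below illustrates how these two textbook belief-update rules differ in a two-armed Bernoulli bandit; all specifics here (Beta(1,1) priors, a 0.2 learning rate, random exploration) are illustrative assumptions and not the paper's nonparametric estimator or experimental design.

# Illustrative sketch only: textbook belief-update benchmarks for a
# two-armed Bernoulli bandit; parameter values are assumptions, not the
# paper's estimates.
import random

def bayesian_update(alpha, beta, reward):
    """Beta-Bernoulli posterior update for one arm's success probability."""
    return (alpha + reward, beta + (1 - reward))

def rl_update(value, reward, learning_rate=0.2):
    """Simple reinforcement-learning (prediction-error) update, non-Bayesian."""
    return value + learning_rate * (reward - value)

def simulate(true_probs=(0.7, 0.3), n_trials=50, seed=0):
    rng = random.Random(seed)
    bayes = [(1.0, 1.0), (1.0, 1.0)]   # Beta(1,1) priors for each arm
    values = [0.5, 0.5]                # RL value estimates for each arm
    for _ in range(n_trials):
        arm = rng.randrange(2)         # random exploration, for illustration only
        reward = 1 if rng.random() < true_probs[arm] else 0
        bayes[arm] = bayesian_update(*bayes[arm], reward)
        values[arm] = rl_update(values[arm], reward)
    bayes_means = [a / (a + b) for a, b in bayes]
    return bayes_means, values

if __name__ == "__main__":
    bayes_means, rl_values = simulate()
    print("Bayesian posterior means:", [round(m, 3) for m in bayes_means])
    print("RL value estimates:     ", [round(v, 3) for v in rl_values])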

Additional Information

We are indebted to Antonio Rangel for his encouragement and for the funding and use of facilities in his lab. We thank Dan Ackerberg, Peter Bossaerts, Colin Camerer, Andrew Ching, Cary Frydman, Ian Krajbich, Pietro Ortoleva, and participants in presentations at U. Arizona, Caltech, UCLA, U. Washington and Choice Symposium 2010 (Key Largo) for comments and suggestions. Published in Games and Economic Behavior, 81, 215-231.

Files

Submitted - sswp1326.pdf (594.3 kB)
md5:3e6962089473045c1c198c127b56258d

Additional details

Created: August 19, 2023
Modified: January 13, 2024