Published August 7, 2017 | Submitted
Report | Open

Exploiting Myopic Learning

Abstract

I develop a framework in which a principal can exploit myopic social learning in a population of agents in order to implement social or selfish outcomes that would not be possible under the traditional fully rational agent model. Learning in this framework takes a simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. To illustrate the approach, I give a wide class of games for which the principal can always obtain strictly better outcomes than the corresponding Nash solution and show how such outcomes can be implemented. The framework is general enough to accommodate many scenarios, and powerful enough to generate predictions that agree with empirically observed behavior.
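
For context (this is the standard textbook form of the dynamic named in the abstract, not text taken from the paper, which may use a discrete-time or imitation-protocol variant): under replicator dynamics, if x_i denotes the population share playing strategy i and f_i(x) its payoff at state x, each share evolves as

    \dot{x}_i = x_i \Big( f_i(x) - \sum_j x_j f_j(x) \Big),

so strategies earning above-average payoff grow at the expense of the rest, which is why such dynamics often carry the population to a Nash equilibrium of the underlying game.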

Additional Information

I thank John Ledyard, R. Preston McAfee, Thomas Palfrey, and Jean-Laurent Rosenthal for their comments and suggestions. I also enjoyed many useful discussions with Dustin Beckett and Théophane Weber.

Files

Submitted - sswp1341.pdf (714.4 kB)
md5:dcba2dbd2934e415aac9f66986af8807
Additional details

Created: August 19, 2023
Modified: January 13, 2024