Exploiting Myopic Learning
- Creators
- Mostagir, Mohamed
Abstract
I develop a framework in which a principal can exploit myopic social learning in a population of agents in order to implement social or selfish outcomes that would not be possible under the traditional fully-rational agent model. Learning in this framework takes a simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. To illustrate the approach, I give a wide class of games for which the principal can always obtain strictly better outcomes than the corresponding Nash solution and show how such outcomes can be implemented. The framework is general enough to accommodate many scenarios, and powerful enough to generate predictions that agree with empirically observed behavior.
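The abstract's reference to imitation learning via replicator dynamics can be illustrated with a minimal numerical sketch. The payoff matrix, step size, and initial population below are illustrative assumptions, not taken from the paper; the dynamics follow the standard replicator equation, under which the share of each strategy grows in proportion to its fitness advantage over the population average.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics:
    x_i' = x_i * ((A x)_i - x^T A x)."""
    f = A @ x        # fitness of each pure strategy
    avg = x @ f      # population-average fitness
    return x + dt * x * (f - avg)

# Illustrative 2x2 symmetric game (an assumption for this sketch):
# a prisoner's-dilemma-like payoff matrix in which the second
# strategy ("defect") strictly dominates the first ("cooperate").
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])
x = np.array([0.9, 0.1])  # start with mostly cooperators
for _ in range(5000):
    x = replicator_step(x, A)
# The population converges to the Nash equilibrium (all defect),
# matching the convergence property the abstract describes.
```

In this dominance-solvable example the myopic imitation process drives the population to the game's unique Nash equilibrium, which is the baseline outcome the principal in the paper seeks to improve upon.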
Additional Information
I thank John Ledyard, R. Preston McAfee, Thomas Palfrey, and Jean-Laurent Rosenthal for their comments and suggestions. I also enjoyed many useful discussions with Dustin Beckett and Théophane Weber.
Files
- Submitted - sswp1341.pdf (714.4 kB)
Additional details
- Eprint ID
- 79367
- Resolver ID
- CaltechAUTHORS:20170725-172657983
- Created
- 2017-08-07 (from EPrint's datestamp field)
- Updated
- 2019-10-03 (from EPrint's last_modified field)
- Caltech groups
- Social Science Working Papers
- Series Name
- Social Science Working Paper
- Series Volume or Issue Number
- 1341