Exploiting Myopic Learning
- Creators
- Mostagir, Mohamed
- Other
- Saberi, Amin
Abstract
We show how a principal can exploit myopic social learning in a population of agents in order to implement social or selfish outcomes that would not be possible under the traditional fully-rational agent model. Learning in our model takes the simple form of imitation, or replicator dynamics, a class of learning dynamics that often leads the population to converge to a Nash equilibrium of the underlying game. We show that, for a large class of games, the principal can always obtain strictly better outcomes than the corresponding Nash solution, and we explicitly specify how such outcomes can be implemented. The methods applied are general enough to accommodate many scenarios, and powerful enough to generate predictions consistent with some empirically observed behavior.
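To make the learning dynamics in the abstract concrete, the following is a minimal sketch of discrete-time replicator (imitation) dynamics on a hypothetical 2x2 symmetric game. The payoff matrix below is an assumption chosen for illustration (a Prisoner's Dilemma, whose unique Nash equilibrium is all-Defect); it is not taken from the paper, and the paper's principal is absent here.

```python
import numpy as np

# Hypothetical payoff matrix (not from the paper): row player's payoff
# against an opponent playing (Cooperate, Defect). This is a Prisoner's
# Dilemma, so the unique Nash equilibrium is everyone playing Defect.
A = np.array([[3.0, 0.0],   # Cooperate vs (C, D)
              [5.0, 1.0]])  # Defect    vs (C, D)

def replicator_step(x, A):
    """One imitation update: each strategy's share grows in proportion
    to its expected payoff relative to the population average."""
    fitness = A @ x        # expected payoff of each pure strategy
    avg = x @ fitness      # population-average payoff
    return x * fitness / avg

# Start with a population that is mostly cooperators.
x = np.array([0.9, 0.1])
for _ in range(200):
    x = replicator_step(x, A)

print(x)  # the Defect share approaches 1: convergence to the Nash equilibrium
```

Under these assumed payoffs the population converges to the Nash outcome, which is exactly the baseline the paper's principal is shown to improve upon.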
Additional Information
© 2010 Springer-Verlag Berlin Heidelberg.
Additional details
- Eprint ID
- 23902
- Resolver ID
- CaltechAUTHORS:20110603-140403915
- Created
- 2011-06-10 (from EPrint's datestamp field)
- Updated
- 2019-10-03 (from EPrint's last_modified field)
- Series Name
- Lecture Notes in Computer Science
- Series Volume or Issue Number
- 6484