Published September 1992
Journal Article
Open Access
A Class of Bandit Problems Yielding Myopic Optimal Strategies
- Creators
- Banks, Jeffrey S.
- Sundaram, Rangarajan K.
Abstract
We consider the class of bandit problems in which each of the n ≧ 2 independent arms generates rewards according to one of the same two reward distributions, and discounting is geometric over an infinite horizon. We show that the dynamic allocation index of Gittins and Jones (1974) in this context is strictly increasing in the probability that an arm is the better of the two distributions. It follows as an immediate consequence that myopic strategies are the uniquely optimal strategies in this class of bandit problems, regardless of the value of the discount parameter or the shape of the reward distributions. Some implications of this result for bandits with Bernoulli reward distributions are given.
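The myopic rule referred to in the abstract is easy to state for the Bernoulli case: track, for each arm, the posterior probability that it follows the better of the two reward distributions, and always pull an arm for which that probability (and hence the expected immediate reward) is highest. The sketch below is an illustrative simulation of that rule under assumed success probabilities `p_good` and `p_bad`; it is not taken from the paper, and all function and parameter names are hypothetical.

```python
import random

def myopic_bernoulli_bandit(n_arms=3, p_good=0.7, p_bad=0.3,
                            prior=0.5, horizon=200, seed=0):
    """Illustrative simulation (not from the paper): each arm secretly follows
    either the 'good' or the 'bad' Bernoulli distribution; the myopic rule
    pulls the arm with the highest posterior probability of being good."""
    rng = random.Random(seed)
    # Hidden truth: which distribution each arm actually follows.
    is_good = [rng.random() < 0.5 for _ in range(n_arms)]
    # Belief that each arm is the good one -- the quantity the paper shows
    # the Gittins index to be strictly increasing in.
    belief = [prior] * n_arms
    total_reward = 0
    for _ in range(horizon):
        # Myopic choice: since p_good > p_bad, maximizing the expected
        # immediate reward b*p_good + (1-b)*p_bad is the same as
        # maximizing the belief b itself.
        arm = max(range(n_arms), key=lambda i: belief[i])
        p = p_good if is_good[arm] else p_bad
        reward = 1 if rng.random() < p else 0
        total_reward += reward
        # Bayes update of the belief for the pulled arm.
        b = belief[arm]
        if reward:
            num, den = b * p_good, b * p_good + (1 - b) * p_bad
        else:
            num, den = b * (1 - p_good), b * (1 - p_good) + (1 - b) * (1 - p_bad)
        belief[arm] = num / den
    return total_reward, belief

if __name__ == "__main__":
    reward, beliefs = myopic_bernoulli_bandit()
    print(reward, [round(b, 3) for b in beliefs])
```

Per the paper's result, this greedy rule is the uniquely optimal strategy in this setting for any discount factor, so the simulation's arm-selection step needs no index computation beyond the belief update shown.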
Additional Information
© 1992 Applied Probability Trust. Received 28 August 1990; revision received 8 May 1991. Financial support from the National Science Foundation and the Sloan Foundation to the first author is gratefully acknowledged.
Attached Files
Published - 3214899.pdf (9.7 MB)
Files
| Name | Size |
|---|---|
| 3214899.pdf (md5:1b61ea3a5ef2142572381e9c0af0cb6c) | 9.7 MB |
Additional details
- Eprint ID
- 67332
- Resolver ID
- CaltechAUTHORS:20160525-080809749
- Funders
- NSF; Alfred P. Sloan Foundation
- Created
- 2016-05-26 (from EPrint's datestamp field)
- Updated
- 2021-11-11 (from EPrint's last_modified field)