Published December 2020 | Accepted Version
Book Section - Chapter | Open

Expert Selection in High-Dimensional Markov Decision Processes
Abstract
In this work we present a multi-armed bandit framework for online expert selection in Markov decision processes and demonstrate its use in high-dimensional settings. Our method takes a set of candidate expert policies and switches between them to rapidly identify the best-performing expert using a variant of the classical upper confidence bound algorithm, thus ensuring low regret in the overall performance of the system. This is useful in applications where several expert policies may be available and one must be selected at run-time for the underlying environment.
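The abstract describes treating each candidate expert policy as an arm of a multi-armed bandit and selecting among them with an upper confidence bound rule. The paper's algorithm is a variant of UCB tailored to MDPs; the sketch below instead shows the standard UCB1 index applied to episode returns, with hypothetical experts modeled as noisy reward generators, purely to illustrate the selection loop.

```python
import math
import random

def ucb1_select(counts, means, t, c=2.0):
    """Return the index maximizing the UCB1 score; sample each arm once first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))

def run_expert_selection(expert_returns, horizon, seed=0):
    """Online expert selection: each episode, run the chosen expert in the
    environment, observe its return, and update its running mean estimate.
    `expert_returns[i](rng)` is a stand-in for one rollout under expert i."""
    rng = random.Random(seed)
    k = len(expert_returns)
    counts = [0] * k
    means = [0.0] * k
    for t in range(1, horizon + 1):
        i = ucb1_select(counts, means, t)
        r = expert_returns[i](rng)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
    return counts, means

# Three hypothetical experts with noisy episode returns of different quality
experts = [lambda rng, m=m: rng.gauss(m, 0.1) for m in (0.2, 0.5, 0.8)]
counts, means = run_expert_selection(experts, horizon=2000)
best = max(range(len(experts)), key=lambda i: counts[i])
```

Over the horizon, the confidence bonus shrinks for frequently played experts, so play concentrates on the expert with the highest mean return while suboptimal experts are sampled only logarithmically often, which is what keeps the overall regret low.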
Additional Information
© 2020 IEEE.

Attached Files
- Accepted Version: 2010.15599.pdf (765.8 kB, md5:96439782b31e8697f198b1639c87b701)
Additional details
- Eprint ID: 110733
- Resolver ID: CaltechAUTHORS:20210903-222215578
- Created: 2021-09-07 (from EPrint's datestamp field)
- Updated: 2021-09-07 (from EPrint's last_modified field)