Published December 2020 | Accepted Version
Book Section - Chapter | Open Access

Expert Selection in High-Dimensional Markov Decision Processes

Abstract

In this work we present a multi-armed bandit framework for online expert selection in Markov decision processes and demonstrate its use in high-dimensional settings. Our method takes a set of candidate expert policies and switches between them to rapidly identify the best-performing expert using a variant of the classical upper confidence bound algorithm, thus ensuring low regret in the overall performance of the system. This is useful in applications where several expert policies may be available, and one must be selected at run time for the underlying environment.
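The abstract describes switching among candidate expert policies using a variant of the classical upper confidence bound algorithm. The paper's exact variant is not reproduced here; the sketch below is a generic UCB1-style selector over stochastic experts, where each pull stands in for running one expert policy for an episode and observing its return. The expert means, horizon, and exploration constant are illustrative assumptions, not values from the paper.

```python
import math
import random

def ucb1_select(counts, means, t, c=2.0):
    """Return the index of the expert with the highest UCB score."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # play each expert once before applying the bound
    return max(range(len(counts)),
               key=lambda i: means[i] + math.sqrt(c * math.log(t) / counts[i]))

def run_bandit(expert_means, horizon, seed=0):
    """Run UCB1 over stochastic experts; return how often each was chosen."""
    rng = random.Random(seed)
    k = len(expert_means)
    counts = [0] * k   # number of times each expert has been run
    means = [0.0] * k  # empirical mean return of each expert
    for t in range(1, horizon + 1):
        i = ucb1_select(counts, means, t)
        # Bernoulli reward: a stand-in for one episode's return under expert i
        reward = 1.0 if rng.random() < expert_means[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # incremental mean update
    return counts

# Hypothetical setup: three experts with unknown mean returns 0.2, 0.5, 0.8
counts = run_bandit([0.2, 0.5, 0.8], horizon=2000)
print(counts)  # the 0.8 expert should dominate the pulls
```

Because the confidence term shrinks for frequently chosen experts, the selector concentrates its pulls on the best expert while still sampling the others occasionally, which is the source of the low-regret guarantee the abstract refers to.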

Additional Information

© 2020 IEEE.

Attached Files

Accepted Version - 2010.15599.pdf


Additional details

Created:
August 20, 2023
Modified:
October 20, 2023