Published August 2020 | Submitted + Published
Journal Article | Open Access

Dueling Posterior Sampling for Preference-Based Reinforcement Learning

Abstract

In preference-based reinforcement learning (RL), an agent interacts with the environment while receiving preferences instead of absolute feedback. While there is increasing research activity in preference-based RL, the design of formal frameworks that admit tractable theoretical analysis remains an open challenge. Building upon ideas from preference-based bandit learning and posterior sampling in RL, we present Dueling Posterior Sampling (DPS), which employs preference-based posterior sampling to learn both the system dynamics and the underlying utility function that governs the preference feedback. As preference feedback is provided on trajectories rather than individual state-action pairs, we develop a Bayesian approach for the credit assignment problem, translating preferences to a posterior distribution over state-action reward models. We prove an asymptotic Bayesian no-regret rate for DPS with a Bayesian linear regression credit assignment model. To our knowledge, this is the first regret guarantee for preference-based RL. We also discuss possible avenues for extending the proof methodology to other credit assignment models. Finally, we evaluate the approach empirically, showing competitive performance against existing baselines.
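The following is a minimal Python sketch of the learning loop described above: two reward models are sampled from the posterior, a policy is rolled out under each, and the resulting trajectory preference updates a Bayesian linear regression posterior over state-action rewards via the trajectory feature difference. The tabular MDP, the +/-0.5 preference labels, the noise variance, and the stand-in transition dynamics are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of a Dueling Posterior Sampling (DPS) style loop with a
# Bayesian linear regression credit-assignment model, assuming a small tabular
# MDP and a stand-in for the (normally learned) transition dynamics.

n_states, n_actions, horizon = 5, 2, 10
d = n_states * n_actions                       # state-action feature dimension
rng = np.random.default_rng(0)
true_reward = rng.standard_normal(d)           # hidden utility generating preferences

# Bayesian linear regression posterior over state-action rewards:
# prior r ~ N(0, I); preferences regress onto trajectory feature differences.
Lambda = np.eye(d)                             # posterior precision
b = np.zeros(d)                                # precision-weighted mean accumulator
sigma2 = 1.0                                   # assumed observation noise variance

def sample_reward():
    cov = np.linalg.inv(Lambda)
    return rng.multivariate_normal(cov @ b, sigma2 * cov)

def greedy_policy(r):
    # Placeholder for planning (e.g. value iteration) under the sampled reward.
    return r.reshape(n_states, n_actions).argmax(axis=1)

def rollout(policy):
    # Roll out one trajectory; return its summed state-action feature vector.
    phi, s = np.zeros(d), rng.integers(n_states)
    for _ in range(horizon):
        a = policy[s]
        phi[s * n_actions + a] += 1.0
        s = rng.integers(n_states)             # stand-in for sampled dynamics
    return phi

for episode in range(100):
    # Sample two independent reward models and roll out a policy under each.
    phi1 = rollout(greedy_policy(sample_reward()))
    phi2 = rollout(greedy_policy(sample_reward()))
    # Trajectory-level preference feedback from the hidden utility.
    y = 0.5 if phi1 @ true_reward >= phi2 @ true_reward else -0.5
    # Credit assignment: update the posterior using the feature difference.
    x = phi1 - phi2
    Lambda += np.outer(x, x) / sigma2
    b += y * x / sigma2
```

As the posterior precision grows, the two sampled reward models concentrate on the hidden utility and the duels increasingly compare near-optimal policies, which is the mechanism behind the no-regret analysis summarized in the abstract.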

Additional Information

© 2021 by the author(s). This work was supported by NIH grant EB007615 and an Amazon graduate fellowship.

Attached Files

Published - novoseller20a.pdf

Submitted - 1908.01289.pdf

Files (2.0 MB)

Name               Size      MD5 checksum
novoseller20a.pdf  738.3 kB  736ed9f14a67e787f3c123881bdf3a6c
1908.01289.pdf     1.3 MB    f9fb2609798916a580ccb81d09502404

Additional details

Created: August 19, 2023
Modified: October 18, 2023