Published January 9, 2020 | Submitted
Report | Open

Triply Robust Off-Policy Evaluation

Abstract

We propose a robust regression approach to off-policy evaluation (OPE) for contextual bandits. We frame OPE as a covariate-shift problem and leverage modern robust regression tools. Ours is a general approach that can be used to augment any existing OPE method that utilizes the direct method. When augmenting doubly robust methods, we call the resulting method Triply Robust. We prove upper bounds on the resulting bias and variance, and derive novel minimax bounds based on robust minimax analysis for covariate shift. Our robust regression method is compatible with deep learning, and is thus applicable to complex OPE settings that require powerful function approximators. Finally, we demonstrate superior empirical performance across the standard OPE benchmarks, especially in the case where the logging policy is unknown and must be estimated from data.
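For context, the "direct method" the abstract refers to is a learned reward model plugged into the off-policy estimate, and doubly robust estimators combine that model with importance-weighted corrections. The sketch below illustrates the standard doubly robust value estimate for a contextual bandit; it is not the paper's implementation, and the function and argument names are hypothetical. The paper's contribution, per the abstract, is to fit the reward model (here `q_hat`) with robust regression under covariate shift.

```python
# Minimal sketch of a doubly robust (DR) off-policy value estimate for a
# contextual bandit. All names are illustrative, not taken from the paper.
import numpy as np

def dr_value_estimate(contexts, actions, rewards, logging_probs, target_probs, q_hat):
    """Doubly robust estimate of the target policy's value.

    contexts:       (n, d) array of observed contexts
    actions:        (n,) array of logged action indices
    rewards:        (n,) array of observed rewards
    logging_probs:  (n,) probability the logging policy assigned to the logged
                    action (estimated from data when the logger is unknown)
    target_probs:   (n, k) target policy's action distribution per context
    q_hat:          callable mapping contexts -> (n, k) predicted rewards
    """
    n = len(actions)
    q = q_hat(contexts)                                        # reward model predictions
    dm_term = np.sum(target_probs * q, axis=1)                 # direct-method component
    iw = target_probs[np.arange(n), actions] / logging_probs   # importance weights
    correction = iw * (rewards - q[np.arange(n), actions])     # IPS-style correction
    return float(np.mean(dm_term + correction))
```

In this standard formulation the estimate stays consistent if either the reward model or the logging-policy probabilities are accurate; the abstract's claim is that a robust-regression fit of the reward model further improves behavior, particularly when the logging policy must itself be estimated.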

Additional Information

Prof. Anandkumar is supported by the Bren endowed Chair, faculty awards from Microsoft, Google, and Adobe, and DARPA PAI and LwLL grants. Anqi Liu is a PIMCO postdoctoral fellow at Caltech.

Files

Submitted - 1911.05811.pdf (645.2 kB, md5:ae4cccb939e7996031b6a4eddc270222)

Additional details

Created: August 19, 2023
Modified: October 18, 2023