Triply Robust Off-Policy Evaluation
- Creators
- Liu, Anqi
- Liu, Hao
- Anandkumar, Anima
- Yue, Yisong
Abstract
We propose a robust regression approach to off-policy evaluation (OPE) for contextual bandits. We frame OPE as a covariate-shift problem and leverage modern robust regression tools. Ours is a general approach that can be used to augment any existing OPE method that utilizes the direct method. When augmenting doubly robust methods, we call the resulting method Triply Robust. We prove upper bounds on the resulting bias and variance, as well as derive novel minimax bounds based on robust minimax analysis for covariate shift. Our robust regression method is compatible with deep learning, and is thus applicable to complex OPE settings that require powerful function approximators. Finally, we demonstrate superior empirical performance across the standard OPE benchmarks, especially in the case where the logging policy is unknown and must be estimated from data.
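The "doubly robust" baseline the abstract builds on combines a direct-method reward model with an importance-weighted correction; the paper's contribution is to fit that reward model with robust regression under covariate shift. As background only, here is a minimal NumPy sketch of the standard doubly robust estimator on synthetic data — the function names, the synthetic bandit setup, and the deliberately imperfect reward model are all illustrative, not the paper's implementation:

```python
import numpy as np

def doubly_robust_estimate(rewards, iw, q_logged, q_target):
    """Doubly robust off-policy value estimate.

    rewards:  observed rewards r_i under the logging policy
    iw:       importance weights pi_e(a_i|x_i) / pi_b(a_i|x_i)
    q_logged: reward-model predictions q_hat(x_i, a_i) at the logged actions
    q_target: reward-model predictions under the target policy's actions
    """
    # Direct-method term plus an importance-weighted correction of the
    # model's residual on the logged data.
    return np.mean(q_target + iw * (rewards - q_logged))

# Tiny synthetic example: 2 actions, uniform logging policy pi_b = 0.5.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                      # contexts
a = rng.integers(0, 2, size=n)              # logged actions
true_q = lambda x, a: 1.0 + 0.5 * a * x     # ground-truth reward function
r = true_q(x, a) + rng.normal(scale=0.1, size=n)

# Target policy: always take action 1, so pi_e(a_i|x_i) is 1 when a_i == 1.
iw = (a == 1).astype(float) / 0.5

# A deliberately imperfect reward model standing in for the direct method.
q_hat = lambda x, a: 0.9 + 0.4 * a * x
v_dr = doubly_robust_estimate(r, iw, q_hat(x, a), q_hat(x, np.ones(n)))
```

With correct propensities the correction term cancels the model's bias, so `v_dr` lands near the true target-policy value of 1.0 even though `q_hat` is misspecified; the paper's triply robust variant swaps the `q_hat` fit for a robust regression that accounts for the covariate shift between logged and target distributions.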
Additional Information
Prof. Anandkumar is supported by the Bren endowed Chair, faculty awards from Microsoft, Google, and Adobe, and DARPA PAI and LwLL grants. Anqi Liu is a PIMCO postdoctoral fellow at Caltech.
Attached Files
Submitted - 1911.05811.pdf
Files
Name | Size
---|---
1911.05811.pdf (md5:ae4cccb939e7996031b6a4eddc270222) | 645.2 kB
Additional details
- Eprint ID
- 100578
- Resolver ID
- CaltechAUTHORS:20200109-085907638
- Funders
- Bren Professor of Computing and Mathematical Sciences
- Microsoft
- Adobe
- Defense Advanced Research Projects Agency (DARPA)
- Caltech PIMCO Graduate Fellowship
- Created
- 2020-01-09 (created from EPrint's datestamp field)
- Updated
- 2023-06-02 (created from EPrint's last_modified field)