Published January 9, 2020 | Submitted
Report | Open Access

Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning


Abstract

Off-policy policy evaluation (OPE) is the problem of estimating the online performance of a policy using only pre-collected historical data generated by another policy. Given the increasing interest in deploying learning-based methods in safety-critical applications, many OPE methods have recently been proposed. Because experimental conditions vary widely across the recent literature, the relative performance of current OPE methods is not well understood. In this work, we present the first comprehensive empirical analysis of a broad suite of OPE methods. Based on thousands of experiments and detailed empirical analyses, we offer a summarized set of guidelines for effectively using OPE in practice, and we suggest directions for future research.
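For context on the problem the paper studies, below is a minimal sketch of trajectory-wise importance sampling (IS), one of the classic estimators in the OPE family. This is an illustrative sketch, not code from the paper; the names (is_estimate, pi_e, pi_b, the trajectory format) are assumptions made here for clarity.

```python
# Illustrative sketch of trajectory-wise importance sampling for OPE.
# Names and data layout are assumptions, not taken from the paper's code.
import numpy as np

def is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
    """Estimate the value of pi_e from data collected under pi_b.

    trajectories: list of trajectories, each a list of (state, action, reward).
    pi_e, pi_b:   callables giving the probability of `action` in `state`
                  under the evaluation / behavior policy, respectively.
    """
    returns = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_e(s, a) / pi_b(s, a)  # cumulative likelihood ratio
            ret += (gamma ** t) * r            # discounted return along trajectory
        returns.append(weight * ret)           # reweight return toward pi_e
    return float(np.mean(returns))
```

The cumulative likelihood ratio makes the estimate unbiased whenever the behavior policy assigns nonzero probability to every action the evaluation policy might take, but its variance can grow exponentially with the horizon, which is one of the trade-offs an empirical study like this one quantifies.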

Files

Submitted - 1911.06854.pdf (2.7 MB, md5:84ff2c8cdec5af2764ebec14fa3df3c2)

Additional details

Created: August 19, 2023
Modified: October 18, 2023