Published February 14, 2020 | Accepted Version
Report

The Power of Linear Controllers in LQR Control

Abstract

The Linear Quadratic Regulator (LQR) framework considers the problem of regulating a linear dynamical system perturbed by environmental noise. We compute the policy regret between three distinct control policies: i) the optimal online policy, whose linear structure is given by the Riccati equations; ii) the optimal offline linear policy, which is the best linear state feedback policy given the noise sequence; and iii) the optimal offline policy, which selects the globally optimal control actions given the noise sequence. We fully characterize the optimal offline policy and show that it has a recursive form in terms of the optimal online policy and future disturbances. We also show that the cost of the optimal offline linear policy converges to the cost of the optimal online policy as the time horizon grows large; consequently, the optimal offline linear policy incurs linear regret relative to the optimal offline policy, even in the optimistic setting where the noise is drawn i.i.d. from a known distribution. Although we focus on the setting where the noise is stochastic, our results also imply new lower bounds on the policy regret achievable when the noise is chosen by an adaptive adversary.
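To make the setting concrete, here is a minimal sketch (not taken from the paper) of the finite-horizon LQR setup the abstract refers to: the backward Riccati recursion yields the gains K_t of the optimal online policy u_t = -K_t x_t, which is then simulated under i.i.d. Gaussian noise. The dynamics matrices A and B, the cost matrices Q and R, the horizon, and the noise scale below are all illustrative assumptions.

```python
import numpy as np

# Illustrative LQR instance (all numbers are placeholder assumptions):
# dynamics x_{t+1} = A x_t + B u_t + w_t, cost sum_t (x_t'Q x_t + u_t'R u_t).

def riccati_gains(A, B, Q, R, T):
    """Backward Riccati recursion; returns feedback gains K_0, ..., K_{T-1}."""
    P = Q.copy()                 # terminal value matrix P_T = Q
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]           # reverse: gains were built from the horizon backward

def rollout_cost(A, B, Q, R, gains, x0, noise):
    """Simulate the linear state-feedback policy u_t = -K_t x_t on one noise sequence."""
    x, cost = x0, 0.0
    for K, w in zip(gains, noise):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + w
    return cost

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # assumed dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)               # assumed costs
T = 100
noise = 0.1 * rng.standard_normal((T, 2)) # i.i.d. disturbances

gains = riccati_gains(A, B, Q, R, T)
print(rollout_cost(A, B, Q, R, gains, np.array([1.0, 0.0]), noise))
```

The two offline benchmarks in the abstract would be evaluated on the same instance and realized noise sequence; the online policy above is the one that must act without knowledge of future disturbances.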

Additional Information

© 2020 G. Goel & B. Hassibi. To appear in Proceedings of Machine Learning Research.

Attached Files

Accepted Version - 2002.02574.pdf (190.5 kB; md5:e8a2bf39cbdf705a8a6e5520836478e0)

Additional details

Created: August 19, 2023
Modified: October 19, 2023