Published December 29, 2016 | public
Book Section - Chapter

Localized LQR with adaptive constraint and performance guarantee

Abstract

In previous work, we proposed the localized linear quadratic regulator (LLQR) method as a scalable way to synthesize and implement distributed controllers for large-scale systems. The idea is to impose an additional spatiotemporal constraint on the closed-loop response, which limits the propagation of dynamics to user-specified subsets of the global network. This allows the controller to be synthesized and implemented in a localized, distributed, parallel, and thus scalable way. However, the additional spatiotemporal constraint also makes the LLQR controller sub-optimal relative to the traditional centralized controller. The goal of this paper is to quantify and bound the sub-optimality introduced by the additional spatiotemporal constraint. Specifically, we propose an algorithm to compute a lower bound on the cost achieved by the centralized controller using only local plant model information. This allows us to determine the sub-optimality of the LLQR controller in a localized way, and to adaptively update the LLQR constraint to exploit the tradeoff between controller complexity and closed-loop performance. The algorithm is tested on a randomized heterogeneous network with 51,200 states, where the LLQR controller achieves at least 99% optimality compared to the unconstrained centralized controller.
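The adaptive scheme described in the abstract suggests a simple loop: synthesize a localized controller for a given locality size, compare its cost against a bound on the centralized cost, and enlarge the locality region until the gap falls below a tolerance. The sketch below is a toy illustration of that tradeoff, not the paper's LLQR synthesis or its distributed lower-bound computation: it uses a small chain network, approximates a localized controller by truncating the centralized LQR gain to a band of width d, and uses the exact centralized cost as the benchmark that a lower bound would stand in for. The chain model, the truncation surrogate, and all names are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

n = 20  # chain of n scalar subsystems (toy stand-in for a large network)

# Nearest-neighbor chain dynamics: x[k+1] = A x[k] + B u[k] + w[k]
A = 0.4 * np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)
B = np.eye(n)   # an actuator at every node (an assumption)
Q = np.eye(n)
R = np.eye(n)

# Centralized benchmark: optimal LQR gain and cost from the Riccati solution.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u = -K x

def closed_loop_cost(Kd):
    """Average infinite-horizon cost under unit white process noise:
    the trace of the closed-loop Lyapunov solution."""
    Acl = A - B @ Kd
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf  # truncated gain destabilizes the loop
    S = solve_discrete_lyapunov(Acl.T, Q + Kd.T @ R @ Kd)
    return np.trace(S)

# No structured controller can beat the centralized optimum, so this
# plays the role of the lower bound computed (locally) in the paper.
cost_central = np.trace(P)

# Adaptive locality: grow the band width d until within 1% of the bound.
eps = 0.01
for d in range(n):
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Kd = np.where(dist <= d, K, 0.0)  # keep gain entries within distance d
    cost_d = closed_loop_cost(Kd)
    if cost_d <= (1 + eps) * cost_central:
        break

print(f"locality d = {d}: cost {cost_d:.4f} vs centralized {cost_central:.4f}")
```

On a weakly coupled chain like this one, the loop typically terminates at a small d, illustrating how controller complexity (the size of the locality region) can be traded against closed-loop performance.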

Additional Information

© 2016 IEEE. This research was supported in part by NSF NetSE, AFOSR, the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office, and by the MURIs "Scalable, Data-Driven, and Provably-Correct Analysis of Networks" (ONR) and "Tools for the Analysis and Design of Complex Multi-Scale Networks" (ARO). The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. The author would like to thank John C. Doyle and Nikolai Matni for discussions of this work, and Andrew Lamperski for providing the code in [4].

Additional details

Created: August 19, 2023
Modified: October 24, 2023