Published December 14, 2021 | public
Book Section - Chapter

Model Learning Predictive Control in Nonlinear Dynamical Systems

Abstract

We study the problem of online learning and control in partially observable nonlinear dynamical systems, where the model dynamics are unknown and the controlling agent has access only to the system outputs. We propose Model Learning Predictive Control (MLPC), an efficient online control framework that learns to control the unknown system while minimizing the overall control cost. MLPC employs Random Fourier Features (RFF) to represent the nonlinear system dynamics and learns the underlying system up to a confidence interval. Once a reliable estimate of the dynamics is obtained, MLPC deploys an MPC oracle with the estimated system dynamics for planning. MLPC occasionally updates the underlying model estimates, improving the accuracy and effectiveness of the MPC policies. We derive a novel finite-time approximation error bound under RFF learning and provide stability guarantees for single-trajectory online control. We show that MLPC attains Õ(T^(2/3)) regret after T time steps in online control of stable partially observable nonlinear systems, against a controller that uses the same MPC oracle with the true system dynamics. We empirically demonstrate the performance of MLPC on the inverted pendulum task and show the flexibility of the proposed general framework by deploying different planning strategies for controller design to achieve low-cost control policies.
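
To make the pipeline described in the abstract concrete, the sketch below shows a minimal MLPC-style loop in Python: transitions are featurized with Random Fourier Features, the dynamics model is refit occasionally by least squares, and a simple random-shooting planner stands in for the MPC oracle. All names, dimensions, the toy pendulum dynamics, cost, and update schedule are assumptions made for this illustration and are not taken from the paper.

```python
# Illustrative MLPC-style loop (not the authors' code): RFF model + MPC oracle.
import numpy as np

rng = np.random.default_rng(0)
d_x, d_u, d_feat = 2, 1, 64          # state, input, and RFF feature dimensions

# Random Fourier Features approximating an RBF kernel over (state, input) pairs.
W = rng.normal(scale=1.0, size=(d_feat, d_x + d_u))
b = rng.uniform(0, 2 * np.pi, size=d_feat)

def rff(x, u):
    z = np.concatenate([x, u])
    return np.sqrt(2.0 / d_feat) * np.cos(W @ z + b)

def true_step(x, u):                  # unknown plant: pendulum-like toy dynamics
    theta, omega = x
    return np.array([theta + 0.05 * omega,
                     omega + 0.05 * (-np.sin(theta) + u[0])])

def mpc_oracle(model, x, horizon=10, n_samples=200):
    """Random-shooting planner used here as a stand-in MPC oracle."""
    best_cost, best_u0 = np.inf, np.zeros(d_u)
    for _ in range(n_samples):
        us = rng.uniform(-2, 2, size=(horizon, d_u))
        xs, cost = x.copy(), 0.0
        for u in us:
            xs = model @ rff(xs, u)                   # roll out learned dynamics
            cost += xs @ xs + 1e-3 * (u @ u)          # quadratic stage cost
        if cost < best_cost:
            best_cost, best_u0 = cost, us[0]
    return best_u0

# Online loop: explore, fit the RFF model by least squares, plan with the
# estimate, and refresh the model occasionally as more data accumulates.
x, data, model = np.array([0.5, 0.0]), [], None
for t in range(300):
    if model is None:                                 # exploration phase
        u = rng.uniform(-2, 2, size=d_u)
    else:
        u = mpc_oracle(model, x)
    x_next = true_step(x, u)
    data.append((rff(x, u), x_next))
    if t >= 50 and t % 50 == 0:                       # occasional model update
        Phi = np.stack([p for p, _ in data])
        Y = np.stack([y for _, y in data])
        model = np.linalg.lstsq(Phi, Y, rcond=None)[0].T
    x = x_next
```

In this sketch the planner is a generic sampling-based oracle; the framework's stated flexibility means it could be swapped for any other MPC-style planner that consumes the same learned RFF model.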

Additional Information

© 2021 IEEE.

Additional details

Created: August 20, 2023
Modified: October 23, 2023