Published March 7, 2022 | Submitted
Report | Open

Stability Constrained Reinforcement Learning for Real-Time Voltage Control

Abstract

Deep reinforcement learning (RL) has been recognized as a promising tool for addressing the challenges of real-time control of power systems. However, its deployment in real-world power systems has been hindered by a lack of formal stability and safety guarantees. In this paper, we propose a stability-constrained reinforcement learning method for real-time voltage control in distribution grids, and we prove that the proposed approach provides a formal voltage stability guarantee. The key idea underlying our approach is an explicitly constructed Lyapunov function that certifies stability. We demonstrate the effectiveness of the approach in case studies, where the proposed method reduces the transient control cost by more than 30% and shortens the response time by a third compared to a widely used linear policy, while always achieving voltage stability. In contrast, standard RL methods often fail to achieve voltage stability.
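To illustrate the kind of Lyapunov argument the abstract refers to, the following is a minimal sketch, not the paper's algorithm: it assumes a LinDistFlow-style linearized voltage model v⁺ = v + X u with a positive-definite sensitivity matrix X (the matrix values here are invented for the toy example), and shows that a simple monotone-decreasing policy u = -α(v - v_ref) makes the quadratic Lyapunov candidate V(v) = ‖v - v_ref‖² decay along trajectories.

```python
import numpy as np

def simulate(X, v0, v_ref, alpha=0.1, steps=50):
    """Roll out the linearized voltage dynamics under a stabilizing
    incremental control and record the Lyapunov value V = ||v - v_ref||^2."""
    v = v0.copy()
    history = [np.linalg.norm(v - v_ref) ** 2]
    for _ in range(steps):
        u = -alpha * (v - v_ref)   # monotone policy: push voltage toward v_ref
        v = v + X @ u              # linearized grid response (assumed model)
        history.append(np.linalg.norm(v - v_ref) ** 2)
    return np.array(history)

# Toy 3-bus example; X is symmetric positive definite (assumed values).
X = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.5, 0.3],
              [0.2, 0.3, 1.0]])
v_ref = np.ones(3)
V = simulate(X, v0=np.array([1.05, 0.93, 1.08]), v_ref=v_ref)

# With alpha small enough that the eigenvalues of (I - alpha*X) lie in
# (-1, 1), the error contracts each step, so V is strictly decreasing.
assert np.all(np.diff(V) < 0)
```

Here stability follows because the closed-loop error evolves as e⁺ = (I - αX)e, and for small α the spectral norm of I - αX is below 1; the paper's contribution is to enforce an analogous Lyapunov decrease condition on a learned neural-network policy rather than this fixed linear one.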

Additional Information

Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)

Attached Files

Submitted - 2109.14854.pdf

Files

2109.14854.pdf (954.5 kB)
md5:805d742e9bc2aca13b8b1f97b5bb6c56

Additional details

Created:
August 20, 2023
Modified:
October 23, 2023