Published October 14, 2020 | Submitted
Report | Open

Distributed Reinforcement Learning in Multi-Agent Networked Systems

Abstract

We study distributed reinforcement learning (RL) for a network of agents. The objective is to find localized policies that maximize the (discounted) global reward. In general, scalability is a challenge in this setting because the size of the global state/action space can be exponential in the number of agents. Scalable algorithms are only known in cases where dependencies are local, e.g., between neighbors. In this work, we propose a Scalable Actor Critic framework that applies in settings where the dependencies are non-local and provide a finite-time error bound that shows how the convergence rate depends on the depth of the dependencies in the network. Additionally, as a byproduct of our analysis, we obtain novel finite-time convergence results for a general stochastic approximation scheme and for temporal difference learning with state aggregation that apply beyond the setting of RL in networked systems.
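The abstract describes localized policies on a networked system whose global state/action space grows exponentially with the number of agents. The toy script below is only an illustrative sketch of that setting, not the paper's Scalable Actor Critic: the environment (a line graph of agents with binary local states), the function names (step, localized_policy, discounted_return), and all parameter values are assumptions made for illustration. It simulates local transitions that depend only on a 1-hop neighborhood, applies per-agent policies that use only each agent's own state, and Monte Carlo estimates the discounted global reward.

import numpy as np

# Illustrative sketch of a networked multi-agent MDP (assumptions, not the paper's model).
rng = np.random.default_rng(0)
n_agents = 6
gamma = 0.95          # discount factor for the global objective
flip_prob = 0.1       # noise in the local transitions

def step(state, action):
    """One transition: each agent's next state depends only on its 1-hop neighborhood."""
    next_state = np.empty_like(state)
    rewards = np.empty(n_agents)
    for i in range(n_agents):
        neighbors = state[max(0, i - 1): i + 2]          # agent i's local neighborhood
        # Local dynamics: move toward the neighborhood majority, nudged by agent i's action.
        target = int(neighbors.mean() + 0.5 * action[i] >= 0.5)
        next_state[i] = target ^ (rng.random() < flip_prob)
        rewards[i] = 1.0 if state[i] == 1 else 0.0        # local reward
    return next_state, rewards.mean()                     # global reward = average of local rewards

def localized_policy(theta, state):
    """Each agent acts using only its own local state (a localized policy)."""
    probs = 1.0 / (1.0 + np.exp(-(theta[:, 0] + theta[:, 1] * state)))
    return (rng.random(n_agents) < probs).astype(int)

def discounted_return(theta, horizon=200):
    """Monte Carlo estimate of the discounted global reward under the localized policy."""
    state = rng.integers(0, 2, n_agents)
    total = 0.0
    for t in range(horizon):
        action = localized_policy(theta, state)
        state, r = step(state, action)
        total += (gamma ** t) * r
    return total

theta = rng.normal(size=(n_agents, 2))   # one small parameter vector per agent
print("estimated discounted global reward:", discounted_return(theta))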

Additional Information

We see no ethical concerns related to this paper.

Attached Files

Submitted - 2006.06555.pdf

Files

2006.06555.pdf (766.0 kB)
md5:fb18563c2f03a2d7df9508ae2a49dd12

Additional details

Created: August 19, 2023
Modified: October 20, 2023