Published March 16, 2023 | Submitted
Report | Open Access

Convergence Rates for Localized Actor-Critic in Networked Markov Potential Games

Abstract

We introduce a class of networked Markov potential games where agents are associated with nodes in a network. Each agent has its own local potential function, and the reward of each agent depends only on the states and actions of agents within a κ-hop neighborhood. In this context, we propose a localized actor-critic algorithm. The algorithm is scalable since each agent uses only local information and does not need access to the global state. Further, the algorithm overcomes the curse of dimensionality through the use of function approximation. Our main results provide finite-sample guarantees up to a localization error and a function approximation error. Specifically, we achieve an Õ(ϵ⁻⁴) sample complexity measured by the averaged Nash regret. This is the first finite-sample bound for multi-agent competitive games that does not depend on the number of agents.
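The record page does not reproduce the paper's algorithm, but as a rough, self-contained illustration of the locality idea, here is a minimal Python sketch: agents sit on a line graph and each runs an independent tabular actor-critic using only its κ-hop observation. The graph, dynamics, reward, and step sizes below are invented for this toy example; only the structural point from the abstract is preserved, namely that each agent learns from its κ-hop neighborhood and never touches the global state.

import numpy as np

# Toy setup (not from the paper): n agents on a line graph,
# binary local states/actions, rewards depend on the kappa-hop neighborhood.
rng = np.random.default_rng(0)
n, kappa = 6, 1
n_states, n_actions = 2, 2

def neighborhood(i):
    # Indices of agents within kappa hops of agent i on the line graph.
    return list(range(max(0, i - kappa), min(n, i + kappa + 1)))

def local_obs(state, i):
    # Encode agent i's kappa-hop state profile as a single integer index.
    idx = 0
    for j in neighborhood(i):
        idx = idx * n_states + int(state[j])
    return idx

# Each agent keeps its own critic table and softmax policy parameters,
# both indexed only by the local observation (no global state anywhere).
obs_sizes = [n_states ** len(neighborhood(i)) for i in range(n)]
critics = [np.zeros(m) for m in obs_sizes]
thetas = [np.zeros((m, n_actions)) for m in obs_sizes]

def local_reward(state, action, i):
    # Invented reward: agent i is rewarded for matching its neighborhood.
    nb = neighborhood(i)
    return float(np.mean([state[j] == action[i] for j in nb]))

def step(state, action):
    # Invented dynamics: each agent's next state depends on its own action
    # and one neighbor's state, so interactions stay local.
    nxt = state.copy()
    for i in range(n):
        nb = neighborhood(i)
        nxt[i] = (action[i] + state[nb[0]]) % n_states
    return nxt

alpha, beta, gamma = 0.1, 0.01, 0.95  # critic lr, actor lr, discount
state = rng.integers(0, n_states, size=n)
for t in range(5000):
    # Each agent acts from a softmax policy over its local observation.
    obs = [local_obs(state, i) for i in range(n)]
    probs = [np.exp(thetas[i][obs[i]] - thetas[i][obs[i]].max()) for i in range(n)]
    probs = [p / p.sum() for p in probs]
    action = np.array([rng.choice(n_actions, p=probs[i]) for i in range(n)])
    nxt = step(state, action)
    for i in range(n):
        r = local_reward(state, action, i)
        o, o2 = obs[i], local_obs(nxt, i)
        # Localized TD(0) critic update on the kappa-hop observation.
        td = r + gamma * critics[i][o2] - critics[i][o]
        critics[i][o] += alpha * td
        # Actor: policy-gradient step using the local TD error as advantage.
        grad = -probs[i]
        grad[action[i]] += 1.0
        thetas[i][o] += beta * td * grad
    state = nxt

print("avg critic value per agent:", [round(float(c.mean()), 3) for c in critics])

The paper's algorithm additionally uses function approximation over the local neighborhood to obtain the Õ(ϵ⁻⁴) guarantee; this tabular toy conveys only the scalability point that each agent's updates depend on its κ-hop neighborhood alone.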

Additional Information

License: Attribution 4.0 International (CC BY 4.0)

Attached Files

Submitted - 2303.04865.pdf (768.0 kB; md5:0db81a3cc40d082d2fe0904519c449b4)

Additional details

Created: August 20, 2023
Modified: October 25, 2023