Published August 6, 2020 | Published + Supplemental Material
Journal Article | Open Access

Convergence Analysis of Gradient-Based Learning in Continuous Games

Abstract

Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
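To make the setting concrete, here is a minimal sketch of simultaneous gradient play in a two-player continuous game with per-agent (non-uniform) learning rates and optional gradient noise. The quadratic costs, learning rates, and noise level below are illustrative assumptions, not taken from the paper; the sketch is not the authors' algorithm, only an instance of the general update each agent performs.

```python
import numpy as np

# Illustrative two-player game (assumed, not from the paper):
#   f1(x1, x2) = 0.5*x1**2 + x1*x2   (agent 1 minimizes over x1)
#   f2(x1, x2) = 0.5*x2**2 - x1*x2   (agent 2 minimizes over x2)
# The point (0, 0) is a Nash equilibrium of this game.

def grad_f1(x1, x2):
    # partial derivative of f1 with respect to agent 1's own variable x1
    return x1 + x2

def grad_f2(x1, x2):
    # partial derivative of f2 with respect to agent 2's own variable x2
    return x2 - x1

def gradient_play(x1, x2, lr1=0.05, lr2=0.01, steps=2000, noise=0.0, seed=0):
    """Simultaneous gradient play with non-uniform learning rates lr1 != lr2.

    noise == 0 models the deterministic setting (oracle gradient access);
    noise > 0 adds zero-mean Gaussian noise, i.e. an unbiased gradient estimate.
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        g1 = grad_f1(x1, x2) + noise * rng.standard_normal()
        g2 = grad_f2(x1, x2) + noise * rng.standard_normal()
        # each agent descends only its own cost, using its own learning rate
        x1, x2 = x1 - lr1 * g1, x2 - lr2 * g2
    return x1, x2

print(gradient_play(1.0, -1.0))             # deterministic: converges toward (0, 0)
print(gradient_play(1.0, -1.0, noise=0.1))  # stochastic: stays in a neighborhood of (0, 0)
```

In this toy example the deterministic iterates converge to the stable Nash equilibrium, while the noisy iterates only settle into a neighborhood of it, mirroring the two settings analyzed in the paper; changing the ratio lr1/lr2 rescales the learned vector field and can change the trajectory taken.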

Additional Information

© 2019 The authors.

Attached Files

Published - 339.pdf

Supplemental Material - chasnov20a-supp.pdf

Files (1.0 MB)

339.pdf (817.2 kB) md5:1d4a6324c81a9ccb02c74a16157f86ad
chasnov20a-supp.pdf (218.7 kB) md5:a5af75f2f37a039455df013206bf8fbc

Additional details

Created: August 19, 2023
Modified: October 23, 2023