Published December 2019 | Submitted + Published + Supplemental Material
Book Section - Chapter | Open Access

Competitive Gradient Descent

Abstract

We introduce a new algorithm for the numerical computation of Nash equilibria of competitive two-player games. Our method is a natural generalization of gradient descent to the two-player setting, where the update is given by the Nash equilibrium of a regularized bilinear local approximation of the underlying game. It avoids oscillatory and divergent behaviors seen in alternating gradient descent. Using numerical experiments and rigorous analysis, we provide a detailed comparison to methods based on optimism and consensus and show that our method avoids making any unnecessary changes to the gradient dynamics while achieving exponential (local) convergence for (locally) convex-concave zero-sum games. Convergence and stability properties of our method are robust to strong interactions between the players, without adapting the stepsize, which is not the case with previous methods. In our numerical experiments on non-convex-concave problems, existing methods are prone to divergence and instability due to their sensitivity to interactions among the players, whereas we never observe divergence of our algorithm. The ability to choose larger stepsizes furthermore allows our algorithm to achieve faster convergence, as measured by the number of model evaluations.
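As a rough illustration of the update described above, the following NumPy sketch applies it to the zero-sum bilinear game f(x, y) = xᵀAy with g = −f, where the Nash equilibrium of the regularized bilinear local game has a closed form. The matrix A, stepsize, and iteration count are illustrative choices, not values from the paper; simultaneous gradient descent is included only for contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # hypothetical game matrix, full rank a.s.
eta = 0.2                          # illustrative stepsize
I = np.eye(n)

x0 = rng.standard_normal(n)
y0 = rng.standard_normal(n)
norm0 = np.linalg.norm(np.concatenate([x0, y0]))

# Competitive-style update: solving the first-order conditions of the
# regularized bilinear local game gives, with D²xy f = A and D²yx g = -Aᵀ,
#   Δx = -η (I - η² D²xy f D²yx g)⁻¹ (∇x f - η D²xy f ∇y g)
#   Δy = -η (I - η² D²yx g D²xy f)⁻¹ (∇y g - η D²yx g ∇x f)
x, y = x0.copy(), y0.copy()
for _ in range(200):
    gx = A @ y        # ∇x f
    gy = -A.T @ x     # ∇y g (g = -f)
    dx = -eta * np.linalg.solve(I + eta**2 * (A @ A.T), gx - eta * (A @ gy))
    dy = -eta * np.linalg.solve(I + eta**2 * (A.T @ A), gy + eta * (A.T @ gx))
    x, y = x + dx, y + dy
norm_cgd = np.linalg.norm(np.concatenate([x, y]))

# Simultaneous gradient descent on the same game oscillates outward here.
x, y = x0.copy(), y0.copy()
for _ in range(200):
    x, y = x - eta * (A @ y), y + eta * (A.T @ x)
norm_gda = np.linalg.norm(np.concatenate([x, y]))

print(f"initial {norm0:.3f}  CGD {norm_cgd:.3e}  GDA {norm_gda:.3e}")
```

In each singular direction of A, the competitive update contracts the distance to the equilibrium (0, 0) by a factor 1/√(1 + η²σ²) per step for any η > 0, whereas the simultaneous update expands it by √(1 + η²σ²), which mirrors the robustness-to-interaction claim in the abstract.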

Additional Information

© 2019 Neural Information Processing Systems Foundation, Inc. A. Anandkumar is supported in part by a Bren endowed chair, DARPA PAI, Raytheon, and Microsoft, Google, and Adobe faculty fellowships. F. Schäfer gratefully acknowledges support by the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning) and by Amazon AWS under the Caltech Amazon Fellows program. We thank the reviewers for their constructive feedback, which has helped us improve the paper.

Attached Files

Published - 8979-competitive-gradient-descent.pdf

Submitted - 1905.12103.pdf

Supplemental Material - 8979-competitive-gradient-descent-supplemental.zip

Files (102.0 MB total)

- 99.7 MB — md5:0ab34146926d0103130dc346595330d1
- 1.0 MB — md5:a7615f63552f2eb4e22cea78d4e1462c
- 1.3 MB — md5:283e411604a99266a6b0e50b00ee72f7

Additional details

Created: August 19, 2023
Modified: October 18, 2023