Published April 2013
Journal Article

Designing Games for Distributed Optimization

Abstract

The central goal in multiagent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control law on the least amount of information possible. This paper focuses on achieving this goal using the field of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting Nash equilibria and the optimizers of the system-level objective and (ii) that the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games. The control design can then be completed using any distributed learning algorithm that guarantees convergence to a Nash equilibrium for the attained game structure. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
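As an illustrative sketch only (not the paper's construction), one classic objective design with both properties the abstract describes is the marginal-contribution ("wonderful life") utility: each agent is paid the welfare it adds relative to a null action, which makes the global welfare an exact potential for the induced game, so best-response dynamics converge to a Nash equilibrium. The toy coverage problem, agent set, and function names below are all hypothetical:

```python
# Toy distributed coverage problem: each agent picks one resource to
# cover; the system-level welfare counts distinct resources covered,
# so the optimum spreads the agents out. BASELINE is the null action
# used to marginalize an agent out of the joint action.
AGENTS = [0, 1, 2]
RESOURCES = ["a", "b", "c"]
BASELINE = None

def welfare(joint):
    """System-level objective: number of distinct resources covered."""
    return len({r for r in joint if r is not None})

def utility(i, joint):
    """Marginal-contribution ('wonderful life') utility for agent i.

    With this design, welfare() is an exact potential function for
    the induced game: any unilateral change in an agent's utility
    equals the change in global welfare.
    """
    without_i = list(joint)
    without_i[i] = BASELINE
    return welfare(joint) - welfare(without_i)

def best_response_dynamics(joint, rounds=10):
    """Round-robin best response; in a finite exact potential game
    this terminates at a (pure) Nash equilibrium."""
    joint = list(joint)
    for _ in range(rounds):
        changed = False
        for i in AGENTS:
            best = max(RESOURCES,
                       key=lambda r: utility(i, joint[:i] + [r] + joint[i + 1:]))
            if utility(i, joint[:i] + [best] + joint[i + 1:]) > utility(i, joint):
                joint[i] = best
                changed = True
        if not changed:
            break
    return joint

eq = best_response_dynamics(["a", "a", "a"])  # all agents start on "a"
print(eq, welfare(eq))  # equilibrium covers all three resources
```

Here the equilibrium reached also maximizes the welfare, illustrating the Nash-equilibrium/optimizer alignment in a simple submodular setting; in general, potential games guarantee only that equilibria exist and that such dynamics converge.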

Additional Information

© 2013 IEEE. Manuscript received August 16, 2012; revised November 30, 2012; accepted January 26, 2013. Date of publication February 11, 2013; date of current version March 09, 2013. This work was supported in part by the Air Force Office of Scientific Research (AFOSR) under Grants FA9550-09-1-0538 and FA9550-12-1-0359 and in part by the Office of Naval Research (ONR) under Grant N00014-12-1-0643. The conference version of this work appeared in [1]. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Isao Yamada.

Additional details

Created: August 22, 2023
Modified: October 24, 2023