Journal Article (Open Access) | Published June 2006 | Submitted version

Optimal Scaling of a Gradient Method for Distributed Resource Allocation

Abstract

We consider a class of weighted gradient methods for distributed resource allocation over a network. Each node of the network is associated with a local variable and a convex cost function; the sum of the variables (resources) across the network is fixed. Starting with a feasible allocation, each node updates its local variable in proportion to the differences between the marginal costs of itself and its neighbors. We focus on how to choose the proportional weights on the edges (scaling factors for the gradient method) to make this distributed algorithm converge and on how to make the convergence as fast as possible. We give sufficient conditions on the edge weights for the algorithm to converge monotonically to the optimal solution; these conditions have the form of a linear matrix inequality. We give some simple, explicit methods to choose the weights that satisfy these conditions. We derive a guaranteed convergence rate for the algorithm and find the weights that minimize this rate by solving a semidefinite program. Finally, we extend the main results to problems with general equality constraints and problems with block separable objective function.
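The core update described above — each node shifting resource along its edges in proportion to the difference of marginal costs at the endpoints — can be sketched in a few lines. This is a minimal illustration under assumed quadratic costs f_i(x) = a_i (x − c_i)² on a 4-node path graph with small uniform edge weights; the cost functions, graph, and weight values are hypothetical choices for demonstration, not taken from the paper (which derives the weight conditions via an LMI and optimizes them by semidefinite programming).

```python
# Sketch of the weighted gradient method for distributed resource allocation:
# each edge (i, j) moves resource from the endpoint with the higher marginal
# cost to the one with the lower, so sum(x) is preserved at every step.
# Quadratic costs, path graph, and uniform weights are illustrative assumptions.
import numpy as np

def weighted_gradient_step(x, grad, edges, weights):
    """One iteration: for each edge, transfer w * (grad_i - grad_j) units."""
    x_new = x.copy()
    for (i, j), w in zip(edges, weights):
        delta = w * (grad[i] - grad[j])
        x_new[i] -= delta  # higher marginal cost gives up resource
        x_new[j] += delta
    return x_new

# Hypothetical quadratic costs f_i(x) = a_i * (x - c_i)^2
a = np.array([1.0, 2.0, 1.5, 1.0])
c = np.array([0.0, 1.0, -1.0, 2.0])
grad_f = lambda x: 2.0 * a * (x - c)

edges = [(0, 1), (1, 2), (2, 3)]   # a 4-node path
weights = [0.1, 0.1, 0.1]          # small uniform edge weights (simple feasible choice)

x = np.array([1.0, 1.0, 1.0, 1.0])  # feasible start; total resource fixed at 4
for _ in range(500):
    x = weighted_gradient_step(x, grad_f(x), edges, weights)

# At optimality the marginal costs equalize across the network,
# while the total resource sum(x) is unchanged.
```

At a fixed point all marginal costs agree (the KKT condition for the equality-constrained problem), which is why convergence here hinges entirely on the edge weights: the paper's contribution is characterizing the admissible weights and choosing the ones that make this contraction as fast as possible.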

Additional Information

© 2006 Springer Science+Business Media, Inc. Published Online: 29 November 2006. Communicated by P. Tseng. The authors are grateful to Professor Paul Tseng and the anonymous referee for their valuable comments that helped us to improve the presentation of this paper.

Attached Files

Submitted - XIAjota06preprint.pdf (217.5 kB; md5:6ab8405d74a0b65e90d4b021a12e76f7)

Additional details

Created: August 22, 2023
Modified: October 23, 2023