Published June 2018 | Submitted
Book Section - Chapter | Open Access

Improving Distributed Gradient Descent Using Reed-Solomon Codes

Abstract

Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as stragglers. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers, and more recently those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adopt the framework of Tandon et al. [1] and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines f theoretically permissible, via an O(f^2) decoding algorithm. The idea is based on a suitably designed Reed-Solomon code that has a sparsest and balanced generator matrix. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.
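
To make the coded-computation setting concrete, the following is a minimal sketch of the gradient-coding framework of Tandon et al. [1] that the abstract builds on, using their simple replication-based (fractional repetition) scheme rather than the Reed-Solomon construction proposed in this paper; the parameters, worker/group layout, and NumPy usage are illustrative assumptions only.

# Sketch of gradient coding with the fractional-repetition scheme of
# Tandon et al. [1]; NOT the Reed-Solomon construction of this paper.
# All parameters and names here are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)

s = 2                     # number of stragglers to tolerate
n = 6                     # number of workers (divisible by s + 1 here)
k = n                     # one data partition per worker index, for simplicity
d = 4                     # gradient dimension

# Partial gradients g_1, ..., g_k stacked as rows; the master wants their sum.
G = rng.standard_normal((k, d))
full_gradient = G.sum(axis=0)

# Each group of s + 1 workers is assigned the same block of s + 1 partitions,
# and every worker sends the sum of the partial gradients in its block.
group_size = s + 1

def worker_message(i):
    g = i // group_size
    block = list(range(g * group_size, (g + 1) * group_size))
    return G[block].sum(axis=0)

# Suppose workers {1, 4} straggle; the master only hears from the rest.
alive = [0, 2, 3, 5]

# Decoding: with at most s stragglers and s + 1 workers per group, every
# group has a survivor, so summing one message per group recovers the sum.
seen_groups, recovered = set(), np.zeros(d)
for i in alive:
    g = i // group_size
    if g not in seen_groups:
        seen_groups.add(g)
        recovered += worker_message(i)

assert np.allclose(recovered, full_gradient)
print("recovered the full gradient from", len(alive), "of", n, "workers")

The paper's contribution replaces this replication-based encoding with a suitably designed Reed-Solomon code whose generator matrix is sparsest and balanced, so that for the same per-machine effort the full gradient is recoverable from the least theoretically permissible number of responding machines f, via an O(f^2) decoder.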

Additional Information

© 2018 IEEE.

Files

Submitted - 1706.05436.pdf (311.6 kB, md5:4ac2edb9404211caa4acae07658589c3)

Additional details

Created: August 19, 2023
Modified: March 5, 2024