Published November 2019 | Submitted
Book Section - Chapter (Open Access)

Federated Learning with Autotuned Communication-Efficient Secure Aggregation

Abstract

Federated Learning enables mobile devices to collaboratively learn a shared inference model while keeping all the training data on a user's device, decoupling the ability to do machine learning from the need to store the data in the cloud. Existing work on federated learning with limited communication demonstrates how random rotation can enable users' model updates to be quantized much more efficiently, reducing the communication cost between users and the server. Meanwhile, secure aggregation enables the server to learn an aggregate of at least a threshold number of devices' model contributions without observing any individual device's contribution in unaggregated form. In this paper, we highlight some of the challenges of setting the parameters for secure aggregation to achieve communication efficiency, especially in the context of the aggressively quantized inputs enabled by random rotation. We then develop a recipe for auto-tuning communication-efficient secure aggregation, based on specific properties of random rotation and secure aggregation – namely, the predictable distribution of vector entries post-rotation and the modular wrapping inherent in secure aggregation. We present both theoretical results and initial experiments.
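To make the two mechanisms the abstract relies on concrete, below is a minimal Python sketch of (a) a randomized Hadamard rotation followed by coarse quantization, exploiting the fact that post-rotation entries follow a predictable, concentrated distribution, and (b) a toy stand-in for secure aggregation using pairwise additive masks that cancel in the sum, with all arithmetic wrapping modulo M. The function name rotate, the modulus M, the quantization parameters k and clip, and the pairwise-mask construction are illustrative assumptions for exposition; they are not the paper's actual protocol or its auto-tuning recipe.

    import numpy as np

    rng = np.random.default_rng(42)
    d, n_clients, M = 8, 3, 2 ** 16  # vector length, clients, aggregation modulus

    def rotate(x, signs):
        # Randomized Hadamard rotation: random sign flips followed by a
        # normalized fast Walsh-Hadamard transform (an orthonormal map).
        y = x * signs
        h = 1
        while h < len(y):
            for i in range(0, len(y), 2 * h):
                for j in range(i, i + h):
                    y[j], y[j + h] = y[j] + y[j + h], y[j] - y[j + h]
            h *= 2
        return y / np.sqrt(len(y))

    signs = rng.choice([-1.0, 1.0], size=d)  # shared seed: same rotation everywhere
    updates = [rng.normal(size=d) for _ in range(n_clients)]

    # Post-rotation entries concentrate around N(0, ||x||^2 / d), so a fixed
    # clipping range covers almost all the mass; quantize to k levels inside it.
    k, clip = 16, 3.0
    quantized = [
        np.clip(np.round((rotate(u, signs) / clip + 1) / 2 * (k - 1)),
                0, k - 1).astype(np.int64)
        for u in updates
    ]

    # Toy stand-in for secure aggregation: pairwise masks that cancel when all
    # clients report, with every operation wrapping modulo M.
    masks = [np.zeros(d, dtype=np.int64) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.integers(0, M, size=d)
            masks[i] = (masks[i] + m) % M
            masks[j] = (masks[j] - m) % M

    masked = [(q + m) % M for q, m in zip(quantized, masks)]
    server_sum = sum(masked) % M  # masks cancel; individual updates stay hidden
    assert np.array_equal(server_sum, sum(quantized) % M)

The sketch also illustrates the tension the paper addresses: the modulus M and the quantization range must be set jointly, since quantized contributions that overflow M wrap around modularly, while an overly large M wastes communication on the masked values.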

Additional Information

© 2019 IEEE.

Files

Submitted - 1912.00131.pdf (2.1 MB)
md5:42ecffdd24323b4cf22d6c357c056d2f

Additional details

Created: August 19, 2023
Modified: October 20, 2023