Published December 2020 | Supplemental Material + Published
Book Section - Chapter | Open Access

Learning compositional functions via multiplicative weight updates

Abstract

Compositionality is a basic structural feature of both biological and artificial neural networks. Learning compositional functions via gradient descent incurs well-known problems like vanishing and exploding gradients, making careful learning rate tuning essential for real-world applications. This paper proves that multiplicative weight updates satisfy a descent lemma tailored to compositional functions. Based on this lemma, we derive Madam, a multiplicative version of the Adam optimiser, and show that it can train state-of-the-art neural network architectures without learning rate tuning. We further show that Madam is easily adapted to train natively compressed neural networks by representing their weights in a logarithmic number system. We conclude by drawing connections between multiplicative weight updates and recent findings about synapses in biology.
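The sketch below illustrates the general idea of a multiplicative weight update as described in the abstract: each weight is scaled by an exponential factor of a normalised gradient, so the relative change per step is controlled by the learning rate. It is a minimal, hypothetical illustration and not the authors' exact Madam algorithm; the function and parameter names are placeholders.

import numpy as np

def multiplicative_update(w, grad, second_moment, eta=0.01, beta=0.999, eps=1e-8):
    # Running estimate of the gradient's second moment, as in Adam-style methods.
    second_moment = beta * second_moment + (1 - beta) * grad**2
    # Scale-free gradient: roughly +/-1 regardless of the raw gradient magnitude.
    g_hat = grad / (np.sqrt(second_moment) + eps)
    # Multiplicative step: weights are scaled, never sign-flipped, and the
    # per-step relative change is governed by eta.
    w = w * np.exp(-eta * np.sign(w) * g_hat)
    return w, second_moment

# Toy usage on the quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w:
w = np.array([1.0, -2.0, 0.5])
v = np.zeros_like(w)
for _ in range(100):
    w, v = multiplicative_update(w, grad=w, second_moment=v)
print(w)  # magnitudes decay multiplicatively; signs are preserved

One design consequence visible even in this toy: because updates act on the logarithm of each weight's magnitude, such a scheme pairs naturally with storing weights in a logarithmic number system, as the abstract notes.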

Additional Information

The authors would like to thank the anonymous reviewers for their helpful comments. JB was supported by an NVIDIA fellowship. The work was partly supported by funding from NASA.

Attached Files

Published - NeurIPS-2020-learning-compositional-functions-via-multiplicative-weight-updates-Paper.pdf

Supplemental Material - NeurIPS-2020-learning-compositional-functions-via-multiplicative-weight-updates-Supplemental.pdf


Additional details

Created: August 20, 2023
Modified: December 22, 2023