Published December 2020 | public
Book Section - Chapter

On the distance between two neural networks and the stability of learning


Abstract

This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions. The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks. Since the resulting learning rule seems to require little to no learning rate tuning, it may unlock a simpler workflow for training deeper and more complex neural networks. The Python code used in this paper is here: https://github.com/jxbz/fromage.
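For concreteness, the learning rule in the linked fromage repository applies a layer-wise relative update: each layer's gradient step is scaled by the ratio of the weight norm to the gradient norm, with a damping factor to keep weight norms stable. The sketch below is an illustrative NumPy rendering under that assumption, not the reference implementation; the step size `eta`, the fallback branch, and the 1/sqrt(1 + eta^2) correction are assumptions drawn from the repository rather than stated in this abstract.

```python
import numpy as np

def fromage_like_step(weights, grads, eta=0.01):
    """Illustrative per-layer update in the spirit of the fromage rule:
    scale each gradient by (weight norm / gradient norm), then damp the
    result by 1 / sqrt(1 + eta^2) so weight norms do not drift upward.

    `weights` and `grads` are lists of per-layer arrays of equal shape.
    """
    new_weights = []
    for w, g in zip(weights, grads):
        w_norm = np.linalg.norm(w)
        g_norm = np.linalg.norm(g)
        if w_norm > 0 and g_norm > 0:
            step = eta * (w_norm / g_norm) * g   # relative step size
        else:
            step = eta * g                       # fall back to a plain gradient step
        new_weights.append((w - step) / np.sqrt(1 + eta ** 2))
    return new_weights
```

Because the step is proportional to the weight norm of each layer, a single global `eta` acts as a relative (per-layer) learning rate, which is consistent with the abstract's claim that the rule needs little to no learning rate tuning.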

Additional Information

The authors would like to thank Rumen Dangovski, Dillon Huff, Jeffrey Pennington, Florian Schaefer and Joel Tropp for useful conversations. They made heavy use of a codebase built by Jiahui Yu. They are grateful to Sivakumar Arayandi Thottakara, Jan Kautz, Sabu Nadarajan and Nithya Natesan for infrastructure support. JB was supported by an NVIDIA fellowship, and this work was funded in part by NASA.

Additional details

Created: August 20, 2023
Modified: October 24, 2023