Published December 8, 2014 | Accepted Version
Conference Paper | Open Access

Provable Methods for Training Neural Networks with Sparse Connectivity

Abstract

We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage techniques previously developed for learning linear networks and show that they can be effectively adapted to learning non-linear networks. We operate on cross-moments between the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can be employed as an effective initializer for gradient descent.
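
The moment-factorization step described above can be made concrete with a small sketch. The snippet below is a minimal illustration, not the paper's algorithm: it assumes standard Gaussian inputs, whose second-order score function is S_2(x) = x x^T - I by Stein's identity; it substitutes a plain SVD for the paper's sparsity-exploiting factorization; and the helper names second_order_score and estimate_first_layer are hypothetical.

import numpy as np

def second_order_score(X):
    # Second-order score function S_2(x) = x x^T - I for x ~ N(0, I).
    # Other input distributions would require their own score functions.
    d = X.shape[1]
    return np.einsum('ni,nj->nij', X, X) - np.eye(d)

def estimate_first_layer(X, y, k):
    # Form the empirical cross-moment M = E[y * S_2(x)] and factorize it.
    # Plain SVD is used here purely for illustration; the paper instead
    # exploits the sparsity of the connectivity.
    M = np.mean(y[:, None, None] * second_order_score(X), axis=0)
    _, _, Vt = np.linalg.svd(M)
    return Vt[:k]  # rows span the first-layer weight directions

# Toy usage: labels generated by a one-hidden-layer sigmoid network
# with a sparse first-layer weight matrix A1 (3 nonzeros per row).
rng = np.random.default_rng(0)
n, d, k = 50000, 10, 3
A1 = np.zeros((k, d))
for i in range(k):
    A1[i, rng.choice(d, size=3, replace=False)] = rng.normal(size=3)
b = rng.normal(size=k)       # biases break symmetry so the cross-moment is nonzero
X = rng.normal(size=(n, d))
y = (1.0 / (1.0 + np.exp(-(X @ A1.T + b)))).sum(axis=1)
W_hat = estimate_first_layer(X, y, k)
print(W_hat.shape)  # (3, 10): estimated weight directions, up to sign and scale

Under these assumptions the cross-moment E[y * S_2(x)] is a rank-k matrix whose column space is spanned by the first-layer weight rows, so the SVD recovers that span; separating out the individual sparse rows is where the paper's sparse-recovery machinery would take over.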

Additional Information

A. Anandkumar is supported in part by a Microsoft Faculty Fellowship, NSF CAREER Award CCF-1254106, NSF Award CCF-1219234, ARO YIP Award W911NF-13-1-0084, and ONR Award N00014-14-1-0665. H. Sedghi is supported by ONR Award N00014-14-1-0665.

Attached Files

Accepted Version - 1412.2693.pdf (152.1 kB, md5:9e181cf19d1a52cbdb979ae8388a401a)

Additional details

Created: August 20, 2023
Modified: October 20, 2023