Published December 12, 2014 | Submitted + Published
Conference Paper | Open

Provable Methods for Training Neural Networks with Sparse Connectivity

Abstract

We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage techniques developed previously for learning linear networks and show that they can also be effectively adapted to learning non-linear networks. We operate on the cross-moments between the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can be employed as an effective initializer for gradient descent.
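To make the moment-factorization idea concrete, the following is a minimal NumPy sketch, not the paper's algorithm but a toy illustration under simplifying assumptions: standard Gaussian input (so the second-order score function is simply x xᵀ − I), a small softplus network, and a plain eigendecomposition in place of the sparse factorization developed in the paper. All names (W, a, M2, U, the problem sizes) are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n = 10, 3, 200_000   # input dimension, hidden units, samples (illustrative sizes)

    # Hypothetical ground-truth network y = a^T softplus(W x) with Gaussian input x ~ N(0, I).
    W = rng.standard_normal((k, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    a = rng.uniform(0.5, 1.5, size=k)

    X = rng.standard_normal((n, d))

    def softplus(z):
        return np.logaddexp(0.0, z)

    y = softplus(X @ W.T) @ a

    # Second-order score function of a standard Gaussian: S2(x) = x x^T - I.
    # Stein's identity gives E[y * S2(x)] = sum_j a_j E[softplus''(w_j . x)] w_j w_j^T,
    # so the column space of this moment matrix equals the span of the first-layer rows.
    M2 = (X.T * y) @ X / n - y.mean() * np.eye(d)

    # Factorize the moment: the top-k eigenvectors estimate span(rows of W).
    eigvals, eigvecs = np.linalg.eigh(M2)
    U = eigvecs[:, -k:]

    # Sanity check: the true rows should lose little energy when projected onto the estimate.
    residual = W - (W @ U) @ U.T
    print("relative subspace error:", np.linalg.norm(residual) / np.linalg.norm(W))

In the paper the factorization step is more refined (it exploits the sparsity of the first-layer weights to recover the rows themselves rather than just their span), and the recovered weights are then used to initialize gradient descent; the toy above only shows why the moment between the label and the input score function carries first-layer information.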

Attached Files

Published - 66.pdf

Submitted - 1412.2693v4.pdf

Files (404.2 kB total)

md5:9e181cf19d1a52cbdb979ae8388a401a (152.1 kB)
md5:3fb3c77bfc5ade52b18eba3ca57d21ac (252.1 kB)

Additional details

Created: August 20, 2023
Modified: October 20, 2023