Published December 8, 2014 | Accepted Version
Conference Paper
Open
Provable Methods for Training Neural Networks with Sparse Connectivity
- Creators
- Sedghi, Hanie
- Anandkumar, Anima
Abstract
We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage techniques developed previously for learning linear networks and show that they can also be effectively adapted to learning non-linear networks. We operate on moments involving the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can be employed as an effective initializer for gradient descent.
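The record carries only the abstract, so as a rough illustration of the moment-factorization idea it describes, here is a minimal numerical sketch (not the paper's actual procedure): it assumes Gaussian inputs and a single softplus hidden layer, forms the empirical cross-moment of the label with the second-order score function S2(x) = xx^T − I, and eigendecomposes it to recover the span of the first-layer weights as an initializer. The paper's method additionally handles sparse connectivity and general input distributions; all sizes and variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input dim, hidden units, samples.
d, k, n = 20, 3, 200_000

# Ground-truth one-hidden-layer network y = a2 . softplus(A1 x).
A1_true = rng.standard_normal((k, d))
A1_true /= np.linalg.norm(A1_true, axis=1, keepdims=True)
a2_true = rng.standard_normal(k)

X = rng.standard_normal((n, d))                       # Gaussian inputs
y = np.logaddexp(0, X @ A1_true.T) @ a2_true          # softplus hidden layer, linear output

# Empirical cross-moment E[y * S2(x)] with S2(x) = x x^T - I for Gaussian x.
# By Stein's identity this is a weighted sum of outer products of the
# first-layer weight rows, so its factorization exposes their span.
M2 = (X * y[:, None]).T @ X / n - y.mean() * np.eye(d)

# Top-k eigenspace spans the first-layer weight directions (up to rotation/sign);
# use it as a candidate initializer for gradient descent.
eigvals, eigvecs = np.linalg.eigh(M2)
top = np.argsort(-np.abs(eigvals))[:k]
A1_init = eigvecs[:, top].T

# Compare the recovered subspace with the true row space of A1.
P_true = A1_true.T @ np.linalg.pinv(A1_true.T)
P_est = A1_init.T @ np.linalg.pinv(A1_init.T)
print("subspace error:", np.linalg.norm(P_true - P_est, 2))
```

The eigendecomposition here stands in for the sparse-recovery step of the paper; it only recovers the subspace spanned by the first-layer rows, which is already enough to warm-start gradient descent in this toy setting.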
Additional Information
A. Anandkumar is supported in part by a Microsoft Faculty Fellowship, NSF CAREER Award CCF-1254106, NSF Award CCF-1219234, ARO YIP Award W911NF-13-1-0084, and ONR Award N00014-14-1-0665. H. Sedghi is supported by ONR Award N00014-14-1-0665.
Files
Name | Size | MD5
---|---|---
1412.2693.pdf (Accepted Version) | 152.1 kB | 9e181cf19d1a52cbdb979ae8388a401a
Additional details
- Eprint ID
- 94343
- Resolver ID
- CaltechAUTHORS:20190401-162914714
- Funders
- Microsoft Faculty Fellowship
- NSF CCF-1254106
- NSF CCF-1219234
- Army Research Office (ARO) W911NF-13-1-0084
- Office of Naval Research (ONR) N00014-14-1-0665
- Created
- 2019-04-03 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)