Provable Tensor Methods for Learning Mixtures of Generalized Linear Models
Abstract
We consider the problem of learning mixtures of generalized linear models (GLMs), which arise in classification and regression problems. Typical learning approaches such as expectation maximization (EM) or variational Bayes can get stuck in spurious local optima. In contrast, we present a tensor decomposition method that is guaranteed to correctly recover the parameters. The key insight is to employ certain feature transformations of the input, which depend on the input generative model. Specifically, we employ score function tensors of the input and compute their cross-correlation with the response variable. We establish that the decomposition of this tensor consistently recovers the parameters, under mild non-degeneracy conditions. We demonstrate that the computational and sample complexity of our method is a low-order polynomial in the input and latent dimensions.
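The abstract describes the pipeline only at a high level: form the cross-correlation between the response and a score function tensor of the input, then decompose that tensor to recover the mixture parameters. Below is a minimal sketch of that idea under assumptions the abstract does not fix: standard Gaussian inputs (so the third-order score function takes the Hermite form), an exponential mean function (log link) for the GLM components, orthonormal parameter vectors, and plain tensor power iteration with deflation as the decomposition routine. The dimensions, sample size, and all variable names are illustrative; this is not the paper's algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# ----- Synthetic mixture of GLMs (illustrative choices, not from the paper) -----
d, k, n = 8, 3, 200_000
A = np.linalg.qr(rng.standard_normal((d, k)))[0]  # true parameter directions, orthonormal for simplicity
w = np.full(k, 1.0 / k)                           # mixing weights

x = rng.standard_normal((n, d))                   # standard Gaussian covariates
z = rng.choice(k, size=n, p=w)                    # latent mixture component per sample
# Exponential mean function (log link), chosen here because the third derivative
# of the link has a clearly nonzero Gaussian expectation.
y = np.exp(np.sum(x * A[:, z].T, axis=1)) + 0.1 * rng.standard_normal(n)

# ----- Third-order score function of a standard Gaussian (Hermite form) -----
# S3(x)_{ijk} = x_i x_j x_k - x_i d_{jk} - x_j d_{ik} - x_k d_{ij}
# Cross-moment T = E[y * S3(x)] ~ sum_j w_j * c_j * a_j (x) a_j (x) a_j  (Stein-type identity)
I = np.eye(d)
yx = y[:, None] * x                                # n x d
T = np.einsum('ni,nj,nk->ijk', yx, x, x) / n       # empirical E[y * x (x) x (x) x]
m1 = yx.mean(axis=0)                               # empirical E[y * x]
T -= (np.einsum('i,jk->ijk', m1, I)
      + np.einsum('j,ik->ijk', m1, I)
      + np.einsum('k,ij->ijk', m1, I))

# ----- Recover components by tensor power iteration with deflation -----
def power_iteration(T, iters=100):
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('ijk,j,k->i', T, v, v)       # tensor contraction T(I, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)     # estimated component weight
    return lam, v

estimates = []
for _ in range(k):
    lam, v = power_iteration(T)
    estimates.append(v)
    T = T - lam * np.einsum('i,j,k->ijk', v, v, v) # deflate the recovered component

# Each true direction should align (up to sign) with one recovered direction.
for j in range(k):
    best = max(abs(v @ A[:, j]) for v in estimates)
    print(f"component {j}: best |cosine| with an estimate = {best:.3f}")
```

The subtraction of the E[y x]-times-identity terms is what turns the raw third moment into the score-function cross-moment; by a Stein-type identity this tensor concentrates around a rank-k sum of the component directions, which is the structure the power iteration exploits.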
Additional Information
© 2016 by the authors. This work was done while H. Sedghi was a visiting researcher at UC Irvine and was supported by NSF Career award FG15890. M. Janzamin is supported by NSF BIGDATA award FG16455. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF Career award CCF-1254106, and ONR Award N00014-14-1-0665.
Attached Files
Published - sedghi16.pdf
Accepted Version - 1412.3046.pdf
Supplemental Material - sedghi16-supp.pdf
Additional details
- Eprint ID: 94345
- Resolver ID: CaltechAUTHORS:20190401-162921773
- NSF: FG15890
- NSF: FG16455
- Microsoft Faculty Fellowship
- NSF: CCF-1254106
- Office of Naval Research (ONR): N00014-14-1-0665
- Created: 2019-04-03 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)