Memory bounded inference in topic models
- Creators
- Gomes, Ryan
- Welling, Max
- Perona, Pietro
- Other:
- Cohen, William
Abstract
What types of algorithms and statistical techniques support learning from very large datasets over long stretches of time? We address this question through a memory bounded version of a variational EM algorithm that approximates inference in a topic model. The algorithm alternates between two phases, "model building" and "model compression", in order to always satisfy a given memory constraint. The model building phase expands its internal representation (the number of topics) as more data arrives, through Bayesian model selection. Compression is achieved by merging data items into clumps and caching only their sufficient statistics. Empirically, the resulting algorithm is able to handle datasets that are orders of magnitude larger than the standard batch version.
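The compression idea from the abstract can be illustrated with a toy sketch. This is not the authors' algorithm: their clumping operates inside a variational EM loop with Bayesian model selection, whereas the snippet below only shows the memory-bounded caching pattern itself, using a hypothetical greedy merge of the two most similar clumps by cosine similarity. All names (`Clump`, `compress`, `MAX_CLUMPS`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class Clump:
    """Cached sufficient statistics for a group of documents."""
    def __init__(self, counts, n_docs=1):
        self.counts = np.asarray(counts, dtype=float)  # summed word counts
        self.n_docs = n_docs

def merge(a, b):
    # Merging adds sufficient statistics; the individual documents
    # themselves are discarded, which is what bounds memory.
    return Clump(a.counts + b.counts, a.n_docs + b.n_docs)

def compress(clumps, max_clumps):
    """Greedily merge the two most similar clumps (cosine similarity
    of their count vectors) until the memory budget is satisfied."""
    clumps = list(clumps)
    while len(clumps) > max_clumps:
        best, pair = -1.0, (0, 1)
        for i in range(len(clumps)):
            for j in range(i + 1, len(clumps)):
                a, b = clumps[i].counts, clumps[j].counts
                sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                if sim > best:
                    best, pair = sim, (i, j)
        merged = merge(clumps[pair[0]], clumps[pair[1]])
        clumps = [c for k, c in enumerate(clumps) if k not in pair] + [merged]
    return clumps

# Stream documents; the cache never holds more than MAX_CLUMPS items.
MAX_CLUMPS = 5
cache = []
for _ in range(20):                      # 20 toy "documents"
    doc = rng.integers(0, 3, size=8)     # word counts over an 8-word vocabulary
    cache.append(Clump(doc))
    cache = compress(cache, MAX_CLUMPS)

total_docs = sum(c.n_docs for c in cache)
print(len(cache), total_docs)            # cache is bounded; no documents lost
```

The key property this sketch preserves is that memory use is bounded by the number of clumps rather than the number of documents seen, while the summed sufficient statistics remain exact for any estimator that is linear in them.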
Additional Information
Copyright 2008 by the author(s)/owner(s). We thank the anonymous reviewers for their helpful comments. This material is based on work supported by the National Science Foundation under grant numbers 0447903 and 0535278, the Office of Naval Research under grant numbers 00014-06-1-0734 and 00014-06-1-0795, and The National Institutes of Health Predoctoral Training in Integrative Neuroscience grant number T32 GM007737.
Attached Files
Published - p344-gomes.pdf
Files
Name | Size | MD5
---|---|---
p344-gomes.pdf | 317.2 kB | cb17bb2ba9930eb76c98256844765057
Additional details
- Eprint ID
- 71520
- Resolver ID
- CaltechAUTHORS:20161026-173114406
- NSF
- IIS-0447903
- NSF
- IIS-0535278
- Office of Naval Research (ONR)
- N00014-06-1-0734
- Office of Naval Research (ONR)
- N00014-06-1-0795
- NIH Predoctoral Fellowship
- T32 GM007737
- Created
- 2016-10-27 (from EPrint's datestamp field)
- Updated
- 2021-11-11 (from EPrint's last_modified field)