Journal Article | Open Access
Published July 2018 | Supplemental Material + Submitted + Published

Iterative Amortized Inference

Abstract

Inference models are a key component in scaling variational inference to deep latent variable models, most notably as encoder networks in variational auto-encoders (VAEs). By replacing conventional optimization-based inference with a learned model, inference is amortized over data examples and therefore more computationally efficient. However, standard inference models are restricted to direct mappings from data to approximate posterior estimates. The failure of these models to reach fully optimized approximate posterior estimates results in an amortization gap. We aim toward closing this gap by proposing iterative inference models, which learn to perform inference optimization through repeatedly encoding gradients. Our approach generalizes standard inference models in VAEs and provides insight into several empirical findings, including top-down inference techniques. We demonstrate the inference optimization capabilities of iterative inference models and show that they outperform standard inference models on several benchmark data sets of images and text.
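
The refinement loop the abstract describes, repeatedly encoding the gradients of the variational objective to update the approximate posterior estimate, can be sketched in a few lines of PyTorch. The following is a minimal illustration under assumed conventions (a Gaussian approximate posterior parameterized by mu and logvar; elbo_fn is a hypothetical user-supplied ELBO estimator), not the authors' released implementation.

```python
import torch
import torch.nn as nn


class IterativeInferenceModel(nn.Module):
    """Refines (mu, logvar) by encoding the current estimate together with
    the gradients of the ELBO with respect to it. Architecture is illustrative."""

    def __init__(self, latent_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Input: current estimate and its ELBO gradients, concatenated.
        self.net = nn.Sequential(
            nn.Linear(4 * latent_dim, hidden_dim),
            nn.ELU(),
            nn.Linear(hidden_dim, 2 * latent_dim),
        )

    def forward(self, mu, logvar, grad_mu, grad_logvar):
        inp = torch.cat([mu, logvar, grad_mu, grad_logvar], dim=-1)
        delta_mu, delta_logvar = self.net(inp).chunk(2, dim=-1)
        # Additive updates to the posterior parameters.
        return mu + delta_mu, logvar + delta_logvar


def iterative_inference(model, elbo_fn, x, latent_dim, num_steps=5):
    """Run a few refinement steps, feeding ELBO gradients back into the model.
    elbo_fn(x, mu, logvar) is assumed to return a differentiable per-example ELBO."""
    mu = torch.zeros(x.size(0), latent_dim)
    logvar = torch.zeros(x.size(0), latent_dim)
    for _ in range(num_steps):
        mu = mu.detach().requires_grad_(True)
        logvar = logvar.detach().requires_grad_(True)
        neg_elbo = -elbo_fn(x, mu, logvar)  # negative ELBO for the current estimate
        grad_mu, grad_logvar = torch.autograd.grad(neg_elbo.sum(), (mu, logvar))
        mu, logvar = model(mu, logvar, grad_mu, grad_logvar)
    return mu, logvar
```

Note that this sketch detaches the estimate between steps for simplicity; to train the inference model end to end one would instead retain the computation graph across refinement steps and backpropagate the final objective through them.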

Additional Information

© 2018 by the author(s). We would like to thank the reviewers as well as Peter Carr, Oisin Mac Aodha, Grant Van Horn, and Matteo Ruggero Ronchi for their insightful feedback. This research was supported in part by JPL PDF 1584398 and NSF 1564330.

Attached Files

Published - marino18a.pdf

Submitted - 1807.09356.pdf

Supplemental Material - marino18a-supp.pdf

Files (9.8 MB total)

marino18a-supp.pdf
md5:be7f1a554f1470fdec83ba21cf43dc07, 2.1 MB
md5:606ba7736129eefb73ee70367b4c0bf3, 4.9 MB
md5:568326953be22dd8da642ae3a1899dee, 2.8 MB

Additional details

Created: August 19, 2023
Modified: October 20, 2023