Published 2014
Book Section - Chapter | Open Access

Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers

Abstract

We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining an excellent solution accuracy. The proposed approach, referred to as Inference by Learning or in short as IbyL, relies on a multi-scale pruning scheme that progressively reduces the solution space by use of a coarse-to-fine cascade of learnt classifiers. We thoroughly experiment with classic computer-vision-related MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF. We make our code available online [4].
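To illustrate the coarse-to-fine pruning idea described above, the sketch below applies it to a toy chain MRF with Potts pairwise terms. This is not the authors' implementation (their released code is referenced in the abstract): the chain topology, the function names (`solve_chain_potts`, `inference_by_learning_sketch`), the way the pairwise weight is rescaled across scales, and the simple margin test that stands in for the learnt pruning classifiers are all illustrative assumptions.

```python
import numpy as np

BIG = 1e9  # cost assigned to pruned labels


def solve_chain_potts(costs, w):
    """Exact MAP for a chain MRF with Potts pairwise terms (Viterbi DP)."""
    n, L = costs.shape
    back = np.zeros((n, L), dtype=int)
    msg = costs[0].copy()
    for i in range(1, n):
        switch_cost = msg.min() + w        # switch from the best previous label (Potts penalty w)
        keep_same = msg                    # previous node kept the same label (no penalty)
        back[i] = np.where(keep_same <= switch_cost, np.arange(L), msg.argmin())
        msg = np.minimum(keep_same, switch_cost) + costs[i]
    labels = np.empty(n, dtype=int)
    labels[-1] = int(msg.argmin())
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels


def inference_by_learning_sketch(unaries, w=1.0, num_scales=3, margin=3.0):
    """Coarse-to-fine label pruning on a chain MRF (illustrative only).

    At each scale, neighbouring nodes are grouped into blocks, the coarsened
    problem is solved exactly, and labels that look unpromising around the
    coarse solution are pruned before moving to the next, finer scale.  A
    hand-made margin test plays the role of the learnt pruning classifiers."""
    N, L = unaries.shape
    active = np.ones((N, L), dtype=bool)          # labels still in the solution space
    for s in range(num_scales, -1, -1):
        block = 2 ** s
        starts = np.arange(0, N, block)
        masked = np.where(active, unaries, BIG)   # pruned labels get a prohibitive cost
        coarse = np.add.reduceat(masked, starts, axis=0)   # sum unaries within each block
        coarse_labels = solve_chain_potts(coarse, w * block)  # crude rescaling of the pairwise weight
        if s == 0:
            return coarse_labels                  # finest scale: done
        # prune: keep only labels whose coarse cost is within `margin` of the chosen label
        best = coarse[np.arange(len(starts)), coarse_labels][:, None]
        keep_coarse = coarse <= best + margin
        active &= np.repeat(keep_coarse, block, axis=0)[:N]   # map the decision back to fine nodes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unaries = rng.random((64, 16))                # 64-node chain, 16 labels
    print(inference_by_learning_sketch(unaries, w=0.5))
```

In this sketch the margin threshold is fixed, whereas the paper's cascade learns per-scale classifiers to decide which labels to prune; swapping the margin test for a trained classifier over suitable features would be the natural next step.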

Additional Information

© 2014 Neural Information Processing Systems. This work was supported by USGS through the "Measurements of surface ruptures produced by continental earthquakes from optical imagery and LiDAR" project (USGS Award G13AP00037), the Terrestrial Hazard Observation and Reporting Center of Caltech, and the Moore Foundation through the Advanced Earth Surface Observation Project (AESOP Grant 2808).

Attached Files

Published - 5357-inference-by-learning-speeding-up-graphical-model-optimization-via-a-coarse-to-fine-cascade-of-pruning-classifiers.pdf


Additional details

Created: August 22, 2023
Modified: January 13, 2024