Published 2013 | Journal Article | Open Access
Available versions: Published + Supplemental Material

Quickly Boosting Decision Trees - Pruning Underachieving Features Early

Abstract

Boosted decision trees are one of the most popular and successful learning techniques used today. Although they are fast at test time, their relatively slow training makes them impractical for applications with real-time learning requirements. We propose a principled approach to overcome this drawback. We prove a bound on the error of a decision stump given its preliminary error on a subset of the training data; the bound may be used to prune unpromising features early in the training process. We propose a fast training algorithm that exploits this bound, yielding speedups of an order of magnitude at no cost in the final performance of the classifier. Our method is not a new variant of Boosting; rather, it may be used in conjunction with existing Boosting algorithms and other sampling heuristics to achieve even greater speedups.
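The abstract states the pruning idea only at a high level. The sketch below is a minimal illustration, not the paper's algorithm: it assumes the simplest bound of this kind, namely that with sample weights normalized to total mass 1 and errors measured as sums of misclassified weight mass, a stump's error on any subset lower-bounds its error on the full training set. Consequently, the best preliminary error a feature achieves on the m heaviest samples lower-bounds the best full-set error it can achieve, and any feature whose preliminary error already exceeds the best full error found so far can be skipped. All names and parameters here are hypothetical.

import numpy as np

def best_stump_error(xcol, y, w):
    # Minimum weighted error over all thresholds and both polarities
    # for a single feature column. Labels are in {-1, +1}.
    idx = np.argsort(xcol)
    y, w = y[idx], w[idx]
    # Cumulative positive/negative weight mass left of each split point.
    pos = np.concatenate(([0.0], np.cumsum(w * (y > 0))))
    neg = np.concatenate(([0.0], np.cumsum(w * (y < 0))))
    err_left_neg = pos + (neg[-1] - neg)  # left side predicted -1
    err_left_pos = neg + (pos[-1] - pos)  # left side predicted +1
    return float(min(err_left_neg.min(), err_left_pos.min()))

def train_stump_with_pruning(X, y, w, m):
    # Returns (best_feature, best_error), pruning features whose
    # preliminary error on the m heaviest samples already rules them out.
    n, d = X.shape
    w = w / w.sum()          # normalize total weight mass to 1
    order = np.argsort(-w)   # heaviest samples first
    X, y, w = X[order], y[order], w[order]
    best_err, best_feat = np.inf, None
    for k in range(d):
        # Preliminary error on the heaviest-m subset: a lower bound on
        # this feature's achievable error on the full set, because the
        # remaining samples can only add (non-negative) error mass.
        prelim = best_stump_error(X[:m, k], y[:m], w[:m])
        if prelim >= best_err:
            continue  # pruned: this feature cannot beat the current best
        full = best_stump_error(X[:, k], y, w)
        if full < best_err:
            best_err, best_feat = full, k
    return best_feat, best_err

Sorting by weight lets the subset carry as much mass as possible, which tightens the lower bound and prunes more features; the published algorithm is more refined than this sketch and integrates the bound into full Boosting rounds.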

Additional Information

Copyright 2013 by the author(s). This work was supported by NSERC 420456-2012, MURI-ONR N00014-10-1-0933, ARO/JPL-NASA Stennis NAS7.03001, and the Moore Foundation.

Attached Files

Published - Appel_QuicklyBoostingDecisionTrees.pdf

Supplemental Material - appel13-supp.pdf

Files (1.4 MB)

appel13-supp.pdf: 160.9 kB, md5:0edda4e6d69cf2d99143c9078d5d65a7
Appel_QuicklyBoostingDecisionTrees.pdf: 1.2 MB, md5:0540d86bd913257b36023bc8c02fb53a

Additional details

Created: August 19, 2023
Modified: October 25, 2023