Published June 2013 | Submitted + Accepted Version
Book Section / Chapter | Open Access

A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration

Abstract

How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground-truth labels. Our approach, which we call Semisupervised Performance Evaluation (SPE), is based on a generative model for the classifier's confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by re-estimating the class-conditional confidence distributions.
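
To make the idea concrete, here is a minimal Python sketch of the core mechanism, not the paper's actual method: it fits a two-component Gaussian mixture to unlabeled classifier scores with plain EM, uses a handful of ground-truth labels only to identify which component is the positive class, and then reads an estimated precision/recall curve and recalibrated posteriors off the fitted mixture. The score distributions, sample sizes, and thresholds below are all hypothetical, and the sketch omits the paper's Bayesian treatment, which is what yields the confidence bounds.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical "new dataset": classifier scores whose labels are mostly unknown.
n_neg, n_pos = 1800, 200
scores = np.concatenate([rng.normal(-1.0, 1.0, n_neg),   # negatives
                         rng.normal(1.5, 0.8, n_pos)])   # positives
labels = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

def fit_gmm2(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture to unlabeled scores via EM."""
    mu = np.percentile(x, [25.0, 75.0])
    sigma = np.full(2, x.std())
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        dens = pi * norm.pdf(x[:, None], mu, sigma)       # E-step: responsibilities
        resp = dens / dens.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)                             # M-step: update parameters
        pi, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

pi, mu, sigma = fit_gmm2(scores)

# A few ground-truth labels are needed only to decide which component is "positive".
lab_pos = rng.choice(np.where(labels == 1)[0], size=5, replace=False)
p = int(np.argmin(np.abs(mu - scores[lab_pos].mean())))  # positive component index
n = 1 - p

# Estimated performance curve, computed entirely from the fitted mixture.
ts = np.linspace(scores.min(), scores.max(), 200)
tail_p = norm.sf(ts, mu[p], sigma[p])    # P(score > t | positive) = estimated recall
tail_n = norm.sf(ts, mu[n], sigma[n])
precision = pi[p] * tail_p / (pi[p] * tail_p + pi[n] * tail_n + 1e-12)

# Recalibration: posterior P(positive | score) under the re-estimated
# class-conditional score distributions.
def recalibrate(s):
    d_p = pi[p] * norm.pdf(s, mu[p], sigma[p])
    d_n = pi[n] * norm.pdf(s, mu[n], sigma[n])
    return d_p / (d_p + d_n)

t = 0.5
print("estimated precision @ t=0.5:", precision[np.searchsorted(ts, t)])
print("empirical precision @ t=0.5:", labels[scores > t].mean())
print("P(positive | score=1.0)    :", recalibrate(1.0))

The point of the sketch is that the mixture's shape is learned from abundant unlabeled scores, so the expensive labels are spent only on disambiguating the components; the paper's full SPE model additionally propagates posterior uncertainty into confidence bounds on the performance curve.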

Additional Information

© 2013 IEEE. This work was supported by National Science Foundation grants 0914783 and 1216045, NASA Stennis grant NAS7.03001, ONR MURI grant N00014-10-1-0933, and gifts from Qualcomm and Google.

Attached Files

Accepted Version: CVPR2013.pdf (1.5 MB, md5:0256246e6548ae78c801495e0aa38146)
Submitted: 1210.2162.pdf (1.1 MB, md5:74a61eb1f4ef64f5167fd80bbe84f1ea)
Total: 2.5 MB

Additional details

Created: August 19, 2023
Modified: October 24, 2023