Published November 2019 | Submitted
Book Section - Chapter (Open Access)

Anchor Loss: Modulating Loss Scale Based on Prediction Difficulty

Abstract

We propose a novel loss function that dynamically re-scales the cross entropy based on how difficult a sample is to predict. Deep neural network architectures for image classification struggle to disambiguate visually similar objects. Likewise, in human pose estimation, symmetric body parts often confuse the network, which assigns them nearly indistinguishable scores. This stems from the way the output prediction is made: only the highest-confidence label is selected, without taking any measure of uncertainty into account. In this work, we define prediction difficulty as a relative property derived from the confidence-score gap between the positive and negative labels. More precisely, the proposed loss function penalizes the network so that the score of a false prediction does not grow large relative to the true label. To demonstrate the efficacy of our loss function, we evaluate it on two different domains: image classification and human pose estimation. We observe improvements in both applications, achieving higher accuracy than the baseline methods.
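
To make the re-scaling idea concrete, below is a minimal sketch of an anchor-style loss in Python/NumPy. It assumes the modulating factor takes the form (1 + q - q*)^gamma, where q* is the score of the true (anchor) label and q is a negative-label score, applied to the negative-class cross-entropy terms; the exact formulation, the function and variable names, and the choice of gamma are illustrative assumptions rather than the paper's official definition.

    import numpy as np

    def anchor_style_loss(probs, target, gamma=0.5):
        """Illustrative re-scaled cross entropy in the spirit of anchor loss.

        probs:  (N, C) per-class scores in (0, 1), e.g. sigmoid outputs
        target: (N,) integer class labels
        gamma:  modulation strength (illustrative default, not from the paper)
        """
        eps = 1e-7
        probs = np.clip(probs, eps, 1.0 - eps)
        n, c = probs.shape
        onehot = np.zeros((n, c))
        onehot[np.arange(n), target] = 1.0

        # Anchor score q*: the network's confidence on the true label.
        q_star = probs[np.arange(n), target][:, None]

        # Modulator rises above 1 whenever a negative-label score approaches
        # or exceeds the anchor score, so confusing samples are penalized more.
        modulator = (1.0 + probs - q_star) ** gamma

        pos_term = -onehot * np.log(probs)                  # standard CE on the true label
        neg_term = -(1.0 - onehot) * modulator * np.log(1.0 - probs)
        return float((pos_term + neg_term).sum(axis=1).mean())

    # Example: the second sample's top negative score nearly ties the true label,
    # so its loss is scaled up relative to the well-separated first sample.
    probs = np.array([[0.90, 0.05, 0.05],
                      [0.45, 0.40, 0.15]])
    target = np.array([0, 0])
    print(anchor_style_loss(probs, target))

In this sketch, setting gamma = 0 reduces the modulator to 1 and recovers a plain binary cross entropy over the label vector, while larger gamma values up-weight samples whose false-label scores rival the true-label score.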

Additional Information

© 2019 IEEE. We would like to thank Joseph Marino and Matteo Ruggero Ronchi for their valuable comments. This work was supported by funding from Disney Research.

Attached Files

Submitted - 1909.11155.pdf (9.3 MB)

Additional details

Created: August 19, 2023
Modified: October 19, 2023