Published June 27, 2019 | Submitted
Report | Open

Penalizing Unfairness in Binary Classification

Abstract

We present a new approach for mitigating unfairness in learned classifiers. In particular, we focus on binary classification tasks over individuals from two populations, where, as our criterion for fairness, we wish to achieve similar false positive rates in both populations, and similar false negative rates in both populations. As a proof of concept, we implement our approach and empirically evaluate its ability to achieve both fairness and accuracy, using datasets from the fields of criminal risk assessment, credit, lending, and college admissions.
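
To make the fairness criterion concrete, the following is a minimal illustrative sketch, not the paper's actual method: a logistic-regression log loss augmented with a penalty on the gap in (smoothly relaxed) false positive and false negative rates between two groups. The function and parameter names (fairness_penalized_loss, lam, group) and the use of mean predicted scores as a differentiable surrogate for the hard rates are assumptions made for illustration.

# Illustrative sketch only; names and the surrogate relaxation are assumptions,
# not the paper's formulation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_penalized_loss(w, X, y, group, lam=1.0):
    """Log loss plus a penalty on FPR/FNR gaps between group 0 and group 1.

    Hard false positive/negative rates are not differentiable, so the penalty
    uses mean predicted scores on each group's negatives/positives as a smooth
    surrogate.
    """
    p = sigmoid(X @ w)
    eps = 1e-12
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

    def surrogate_rate(mask):
        # Mean predicted probability over the selected examples (0 if empty).
        return p[mask].mean() if mask.any() else 0.0

    # Surrogate FPR: average score on true negatives of each group.
    fpr_gap = abs(surrogate_rate((y == 0) & (group == 0)) -
                  surrogate_rate((y == 0) & (group == 1)))
    # Surrogate FNR: average (1 - score) on true positives of each group.
    fnr_gap = abs((1 - surrogate_rate((y == 1) & (group == 0))) -
                  (1 - surrogate_rate((y == 1) & (group == 1))))

    return log_loss + lam * (fpr_gap + fnr_gap)

# Toy usage on synthetic data, minimizing the penalized loss with Nelder-Mead.
if __name__ == "__main__":
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, d = 500, 3
    X = rng.normal(size=(n, d))
    group = rng.integers(0, 2, size=n)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    res = minimize(fairness_penalized_loss, x0=np.zeros(d),
                   args=(X, y, group, 1.0), method="Nelder-Mead")
    print("learned weights:", res.x)

The weight lam trades off accuracy against the fairness penalty: lam = 0 recovers ordinary logistic regression, while larger values push the surrogate false positive and false negative rates of the two populations closer together.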

Additional Information

This work was supported in part by NSF grants CNS-1254169 and CNS-1518941, US-Israel Binational Science Foundation grant 2012348, Israeli Science Foundation (ISF) grant #1044/16, a subcontract on the DARPA Brandeis Project, and the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office.

Attached Files

Submitted - 1707.00044.pdf (512.4 kB, md5:fde30cb4d4c74d94994bccc0a67ec47e)

Additional details

Created: August 19, 2023
Modified: October 20, 2023