Published May 18, 2021 | Published + Submitted
Book Section - Chapter Open

Robust Fairness Under Covariate Shift

Abstract

Making predictions that are fair with regard to protected attributes (race, gender, age, etc.) has become an important requirement for classification algorithms. Existing techniques derive a fair model from sampled labeled data, relying on the assumption that training and testing data are independently and identically drawn (i.i.d.) from the same distribution. In practice, distribution shift can and does occur between training and testing datasets as the characteristics of individuals interacting with the machine learning system change. We investigate fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs or covariates change while the conditional label distribution remains the same. We seek fair decisions under these assumptions on target data with unknown labels. We propose an approach that obtains a predictor that is robust to worst-case testing performance while satisfying target fairness requirements and matching statistical properties of the source data. We demonstrate the benefits of our approach on benchmark prediction tasks.
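The paper's contribution is a robust (minimax) formulation; as a rough illustration of the covariate-shift setting it addresses, the sketch below shows a much simpler baseline: importance-weighted training on labeled source data, followed by a demographic-parity check on unlabeled target data. This is not the authors' algorithm, and all data, variable names, and the synthetic setup are hypothetical.

```python
# Illustrative sketch only -- NOT the authors' robust minimax method.
# Covariate shift: P_src(x) != P_tgt(x) while P(y|x) is shared, so a simple
# baseline reweights source examples by w(x) = P_tgt(x)/P_src(x), estimated
# with a domain classifier, then checks demographic parity on target inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic source/target covariates with shifted means; shared P(y|x).
X_src = rng.normal(0.0, 1.0, size=(2000, 2))
X_tgt = rng.normal(0.7, 1.0, size=(2000, 2))
true_w = np.array([1.5, -1.0])
y_src = (X_src @ true_w + rng.normal(0.0, 0.5, 2000) > 0).astype(int)
a_tgt = (X_tgt[:, 0] > 0).astype(int)  # hypothetical protected attribute

# Estimate importance weights w(x) = P_tgt(x) / P_src(x) via a domain classifier.
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom_clf = LogisticRegression().fit(X_dom, d_dom)
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)

# Importance-weighted classifier trained on labeled source data only.
clf = LogisticRegression().fit(X_src, y_src, sample_weight=weights)

# Demographic-parity gap measured on the unlabeled target data.
pred_tgt = clf.predict(X_tgt)
dp_gap = abs(pred_tgt[a_tgt == 1].mean() - pred_tgt[a_tgt == 0].mean())
print(f"target demographic-parity gap: {dp_gap:.3f}")
```

Unlike this baseline, the paper's approach optimizes for worst-case target performance while enforcing fairness constraints on the target distribution directly.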

Additional Information

© 2021 Association for the Advancement of Artificial Intelligence. Published 2021-05-18. This work was supported by the National Science Foundation Program on Fairness in AI in collaboration with Amazon under award No. 1939743.

Attached Files

Published - 17135-Article_Text-20629-1-2-20210518.pdf

Submitted - 2010.05166.pdf

Files (3.8 MB)

17135-Article_Text-20629-1-2-20210518.pdf (2.9 MB, md5:a6025e981ce17c8aab07bd798ccdf0d1)
2010.05166.pdf (834.0 kB, md5:db91f7d12a5dfd997d75ac14761739d0)

Additional details

Created: August 20, 2023
Modified: October 23, 2023