Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization
Abstract
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications -- reflecting the need for classifiers and predictive models that are robust to the distribution shifts that arise from phenomena such as selection bias or nonstationarity. Existing algorithms for solving Wasserstein DRSL -- one of the most popular DRSL frameworks, based on robustness to perturbations measured in the Wasserstein distance -- involve solving complex subproblems or fail to make use of stochastic gradients, limiting their use in large-scale machine learning problems. We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable stochastic extra-gradient algorithms that provably achieve faster convergence rates than existing approaches. We demonstrate their effectiveness on synthetic and real data in comparison with existing DRSL approaches. Key to our results is the use of variance reduction and random reshuffling to accelerate stochastic min-max optimization, the analysis of which may be of independent interest.
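For readers unfamiliar with the extra-gradient scheme mentioned in the abstract, the following is a minimal sketch of the deterministic extrapolation-then-update step on a toy quadratic saddle-point problem. It is illustrative only and is not the paper's algorithm: the objective, step size eta, and dimensions are hypothetical, and the stochastic gradients, variance reduction, and random reshuffling components of the paper are omitted.

```python
import numpy as np

# Toy saddle-point problem (hypothetical, not the Wasserstein DRSL objective):
#   min_x max_y  f(x, y) = x^T A y + (mu/2)||x||^2 - (mu/2)||y||^2
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
mu = 0.1

def grad_x(x, y):
    return A @ y + mu * x

def grad_y(x, y):
    return A.T @ x - mu * y

x, y = rng.standard_normal(d), rng.standard_normal(d)
eta = 0.05
for _ in range(2000):
    # Extrapolation step: probe the gradient at a look-ahead point.
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Update step: move the current iterate using the look-ahead gradient.
    x = x - eta * grad_x(x_half, y_half)
    y = y + eta * grad_y(x_half, y_half)

# The saddle point of this toy objective is at (0, 0), so both norms should shrink.
print("||x||, ||y|| after extra-gradient:", np.linalg.norm(x), np.linalg.norm(y))
```

The look-ahead evaluation is what distinguishes extra-gradient from plain gradient descent-ascent, which can cycle on saddle-point problems; the paper's contribution is to make this template scalable via stochastic gradients with variance reduction and random reshuffling.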
Additional Information
Yaodong Yu, Tianyi Lin and Eric Mazumdar contributed equally to this work. This work was supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764.
Attached Files
Submitted - 2104.13326.pdf
Additional details
- Eprint ID: 110724
- Resolver ID: CaltechAUTHORS:20210903-213710817
- Funder: Office of Naval Research (ONR), grant N00014-18-1-2764
- Created: 2021-09-07 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)