Differentially Private Post-Processing for Fair Regression

May 07, 2024 · Entered Twilight · 🏛 International Conference on Machine Learning

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, communities.ipynb, law.ipynb, postprocess.py, requirements.txt

Authors: Ruicheng Xian, Qiaobo Li, Gautam Kamath, Han Zhao
arXiv ID: 2405.04034
Category: cs.LG: Machine Learning
Cross-listed: cs.CR, cs.CY
Citations: 9
Venue: International Conference on Machine Learning
Repository: https://github.com/rxian/fair-regression ⭐ 2
Last Checked: 1 month ago
Abstract
This paper describes a differentially private post-processing algorithm for learning fair regressors satisfying statistical parity, addressing privacy concerns of machine learning models trained on sensitive data, as well as fairness concerns about their potential to propagate historical biases. The algorithm can be applied to post-process any given regressor to improve fairness by remapping its outputs. It consists of three steps: first, the output distributions are estimated privately via histogram density estimation and the Laplace mechanism; then their Wasserstein barycenter is computed; and finally the optimal transports to the barycenter are used to post-process the outputs to satisfy fairness. The paper analyzes the sample complexity of the algorithm and provides a fairness guarantee, revealing a trade-off between the statistical bias and variance induced by the choice of the number of histogram bins, where using fewer bins always favors fairness at the expense of error.
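The three steps in the abstract can be sketched in code. This is a minimal, hedged reconstruction, not the authors' implementation (see the linked repository for that): the function name, the bin range, the noise scale, and the quantile grid are all illustrative assumptions. For one-dimensional outputs, the Wasserstein-2 barycenter has a closed form as the average of the groups' quantile functions, and the optimal transport map is the composition of a group's CDF with the barycenter's quantile function.

```python
import numpy as np

def private_fair_postprocess(scores_by_group, n_bins=20, epsilon=1.0,
                             value_range=(0.0, 1.0), seed=0):
    """Sketch of the three-step DP fair post-processing (hypothetical API).

    1. Privately estimate each group's output distribution with a
       histogram plus Laplace noise (the Laplace mechanism).
    2. Compute the 1D Wasserstein barycenter of the noisy histograms
       (on the line: the average of the groups' quantile functions).
    3. Return a map that remaps a group's outputs to the barycenter via
       optimal transport (group CDF followed by barycenter quantile).
    """
    rng = np.random.default_rng(seed)
    lo, hi = value_range
    edges = np.linspace(lo, hi, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Step 1: noisy histogram density per group. Scale 2/epsilon is an
    # illustrative choice; the right calibration depends on the
    # neighboring-dataset notion used in the paper.
    cdfs = {}
    for g, scores in scores_by_group.items():
        counts, _ = np.histogram(scores, bins=edges)
        noisy = np.clip(counts + rng.laplace(scale=2.0 / epsilon,
                                             size=n_bins), 0, None)
        probs = noisy / max(noisy.sum(), 1e-12)
        cdfs[g] = np.cumsum(probs)

    # Step 2: barycenter quantile function = average of group quantiles,
    # evaluated on a fixed grid of probability levels.
    qs = np.linspace(0.0, 1.0, 101)
    bary_quantiles = np.mean(
        [np.interp(qs, cdfs[g], centers) for g in cdfs], axis=0)

    # Step 3: optimal transport map for group g on the real line:
    # score -> CDF_g(score) -> barycenter quantile.
    def remap(group, scores):
        u = np.interp(scores, centers, cdfs[group])  # group CDF
        return np.interp(u, qs, bary_quantiles)      # barycenter quantile

    return remap
```

Applied to two groups with shifted score distributions, the returned map pushes both toward the common barycenter, shrinking the statistical-parity gap; larger bins (fewer of them) make the private histograms less noisy but coarser, which is the bias-variance trade-off the abstract mentions.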

📜 Similar Papers

In the same crypt — Machine Learning