Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes

October 06, 2022 · Entered Twilight · 🏛 International Conference on Machine Learning

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitattributes, .gitignore, COMPAS, README.md, celeba, eval_celeba.sh, eval_compas.sh, fair_eval.py, fair_eval_celeba.py, hoc.py, result, table_celeba.py, table_compas.py

Authors: Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hang Li, Yang Liu
arXiv ID: 2210.03175
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.CY
Citations: 27
Venue: International Conference on Machine Learning
Repository: https://github.com/UCSC-REAL/fair-eval ⭐ 7
Last Checked: 1 month ago
Abstract
Evaluating fairness can be challenging in practice because the sensitive attributes of data are often inaccessible due to privacy constraints. The go-to approach that the industry frequently adopts is using off-the-shelf proxy models to predict the missing sensitive attributes, e.g. Meta [Alao et al., 2021] and Twitter [Belli et al., 2022]. Despite its popularity, three important questions remain unanswered: (1) Is directly using proxies efficacious in measuring fairness? (2) If not, is it possible to accurately evaluate fairness using proxies only? (3) Given the ethical controversy over inferring user private information, is it possible to only use weak (i.e. inaccurate) proxies in order to protect privacy? First, our theoretical analyses show that directly using proxy models can give a false sense of (un)fairness. Second, we develop an algorithm that is able to measure fairness (provably) accurately with only three properly identified proxies. Third, we show that our algorithm allows the use of only weak proxies (e.g. with only 68.85% accuracy on COMPAS), adding an extra layer of protection on user privacy. Experiments validate our theoretical analyses and show our algorithm can effectively measure and mitigate bias. Our results imply a set of practical guidelines for practitioners on how to use proxies properly. Code is available at github.com/UCSC-REAL/fair-eval.
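The abstract's first claim, that plugging a proxy's predicted sensitive attribute directly into a fairness metric can give a false sense of (un)fairness, is easy to see numerically. The sketch below is not the paper's algorithm; it is a minimal simulation (with made-up error rates and disparity levels) showing how a weak proxy's label noise shrinks the measured demographic-parity gap relative to the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True (hidden) sensitive attribute, and a classifier's binary decisions
# with a built-in disparity: P(yhat=1 | a=1)=0.7 vs P(yhat=1 | a=0)=0.5.
a = rng.binomial(1, 0.5, n)
yhat = rng.binomial(1, np.where(a == 1, 0.7, 0.5))

def dp_gap(yhat, groups):
    """Demographic-parity gap: |P(yhat=1 | g=1) - P(yhat=1 | g=0)|."""
    return abs(yhat[groups == 1].mean() - yhat[groups == 0].mean())

# A weak proxy: predicts the sensitive attribute with ~70% accuracy
# (each label flipped independently with probability 0.3).
flip = rng.binomial(1, 0.3, n).astype(bool)
a_proxy = np.where(flip, 1 - a, a)

print(f"true DP gap:  {dp_gap(yhat, a):.3f}")        # ~0.20
print(f"proxy DP gap: {dp_gap(yhat, a_proxy):.3f}")  # ~0.08, shrunk toward 0
```

With symmetric 30% label noise the measured gap is attenuated by a factor of (1 - 2·0.3) = 0.4, so an auditor relying on the proxy directly would report a far smaller disparity than actually exists; that is the motivation for the paper's calibrated estimator, which corrects for proxy error rather than ignoring it.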
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning