Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage

June 07, 2022 · Entered Twilight · 🏛 Knowledge Discovery and Data Mining

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: README.md, dataset.py, dataset, fairvgnn.py, fairvgnn_credit.py, img, learn.py, model.py, prelim, requirements.txt, run_adv.sh, run_adwc.sh, run_bail.sh, run_credit.sh, run_german.sh, source.py, utils.py

Authors: Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
arXiv ID: 2206.03426
Category: cs.LG: Machine Learning
Cross-listed: cs.CR, cs.CY
Citations: 113
Venue: Knowledge Discovery and Data Mining
Repository: https://github.com/YuWVandy/FairVGNN ⭐ 27
Last Checked: 1 month ago
Abstract
Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs. However, they may inherit historical prejudices from training data, leading to discriminatory bias in predictions. Although some work has developed fair GNNs, most of it directly borrows fair representation learning techniques from non-graph domains without considering the sensitive attribute leakage caused by feature propagation in GNNs. Indeed, we empirically observe that feature propagation can vary the correlation of previously innocuous non-sensitive features with the sensitive ones. This can be viewed as a leakage of sensitive information that could further exacerbate discrimination in predictions. Thus, we design two feature masking strategies according to feature correlations to highlight the importance of considering feature propagation and correlation variation in alleviating discrimination. Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features, accounting for correlation variation after feature propagation. Given the learned fair views, we adaptively clamp weights of the encoder to avoid using sensitive-related features. Experiments on real-world datasets demonstrate that FairVGNN enjoys a better trade-off between model utility and fairness. Our code is publicly available at https://github.com/YuWVandy/FairVGNN.
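To make the leakage idea concrete, here is a minimal NumPy sketch of correlation-based feature masking: measure each feature column's correlation with the sensitive attribute *after* one round of mean-aggregation propagation, and zero out columns that have become too correlated. This is an illustrative simplification, not the authors' FairVGNN implementation; the `threshold` parameter, the mean-aggregation propagation rule, and the hard (rather than learned) mask are all assumptions made for the example.

```python
import numpy as np

def propagate(adj, x):
    """One step of mean-aggregation feature propagation (GCN-style).

    adj: (n, n) adjacency matrix; x: (n, d) node features.
    """
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ x) / np.clip(deg, 1, None)

def sensitive_correlation(x, s):
    """Absolute Pearson correlation of each feature column with the
    sensitive attribute vector s; returns a (d,) array in [0, 1]."""
    xc = x - x.mean(axis=0)
    sc = s - s.mean()
    denom = np.linalg.norm(xc, axis=0) * np.linalg.norm(sc)
    denom = np.where(denom == 0, 1.0, denom)  # guard constant columns
    return np.abs(xc.T @ sc) / denom

def mask_leaky_features(adj, x, s, threshold=0.5):
    """Zero out feature columns whose correlation with s exceeds the
    threshold after propagation, even if they looked innocuous before.
    Returns the masked features and the boolean keep-mask."""
    corr_after = sensitive_correlation(propagate(adj, x), s)
    keep = corr_after < threshold
    return x * keep, keep
```

Comparing `sensitive_correlation(x, s)` with `sensitive_correlation(propagate(adj, x), s)` on a real graph is the quickest way to see the correlation variation the abstract describes.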
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Machine Learning