R.I.P. 👻 Ghosted
Improving Fairness in Graph Neural Networks via Mitigating Sensitive Attribute Leakage
June 07, 2022 · Entered Twilight · Knowledge Discovery and Data Mining
Repo contents: README.md, dataset.py, dataset, fairvgnn.py, fairvgnn_credit.py, img, learn.py, model.py, prelim, requirements.txt, run_adv.sh, run_adwc.sh, run_bail.sh, run_credit.sh, run_german.sh, source.py, utils.py
Authors
Yu Wang, Yuying Zhao, Yushun Dong, Huiyuan Chen, Jundong Li, Tyler Derr
arXiv ID
2206.03426
Category
cs.LG: Machine Learning
Cross-listed
cs.CR, cs.CY
Citations
113
Venue
Knowledge Discovery and Data Mining
Repository
https://github.com/YuWVandy/FairVGNN
⭐ 27
Last Checked
1 month ago
Abstract
Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs. However, they may inherit historical prejudices from training data, leading to discriminatory bias in predictions. Although some work has developed fair GNNs, most of it directly borrows fair representation learning techniques from non-graph domains without considering the potential problem of sensitive attribute leakage caused by feature propagation in GNNs. Indeed, we empirically observe that feature propagation can change the correlation between previously innocuous non-sensitive features and the sensitive ones. This can be viewed as a leakage of sensitive information which could further exacerbate discrimination in predictions. Thus, we design two feature masking strategies according to feature correlations to highlight the importance of considering feature propagation and correlation variation in alleviating discrimination. Motivated by our analysis, we propose Fair View Graph Neural Network (FairVGNN) to generate fair views of features by automatically identifying and masking sensitive-correlated features, accounting for correlation variation after feature propagation. Given the learned fair views, we adaptively clamp weights of the encoder to avoid using sensitive-related features. Experiments on real-world datasets demonstrate that FairVGNN enjoys a better trade-off between model utility and fairness. Our code is publicly available at https://github.com/YuWVandy/FairVGNN.
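The leakage effect the abstract describes can be seen on a toy graph. The NumPy sketch below is illustrative only, not the authors' implementation (the graph, threshold `tau`, and correlation-based masking rule are all assumptions for demonstration): GCN-style symmetric-normalized propagation smooths features over a homophilous graph, which can raise a feature's correlation with the sensitive attribute, and a simple masking strategy then zeroes out channels whose post-propagation correlation exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes, binary sensitive attribute s, two feature channels.
# Channel 0 is sensitive-correlated; channel 1 starts out innocuous.
s = np.array([0, 0, 0, 1, 1, 1], dtype=float)
X = np.column_stack([
    s + 0.1 * rng.standard_normal(6),  # channel 0: correlated with s
    rng.standard_normal(6),            # channel 1: random before propagation
])

# Homophilous adjacency: edges mostly within same-s groups.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# GCN-style normalization: D^{-1/2} (A + I) D^{-1/2}.
A_hat = A + np.eye(6)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))

def corr_with_s(F):
    """Absolute Pearson correlation of each feature channel with s."""
    return np.abs([np.corrcoef(F[:, j], s)[0, 1] for j in range(F.shape[1])])

X_prop = A_norm @ A_norm @ X       # two propagation steps
before = corr_with_s(X)
after = corr_with_s(X_prop)

# Masking: drop channels whose post-propagation correlation with s
# exceeds a (hypothetical) threshold tau, yielding a fairer feature view.
tau = 0.5
mask = after < tau                 # True = keep channel
X_fair = X * mask                  # masked channels are zeroed out
```

On this graph the sensitive-correlated channel stays highly correlated with `s` after propagation and gets masked, mirroring the paper's point that masking must be judged on post-propagation correlations rather than raw ones.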
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms