HomoGCL: Rethinking Homophily in Graph Contrastive Learning

June 16, 2023 · Entered Twilight · Knowledge Discovery and Data Mining

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: args.py, aug.py, dataset.py, main.py, model.py, readme.md, utils.py

Authors: Wen-Zhi Li, Chang-Dong Wang, Hui Xiong, Jian-Huang Lai
arXiv ID: 2306.09614
Category: cs.LG (Machine Learning)
Cross-listed: cs.SI
Citations: 45
Venue: Knowledge Discovery and Data Mining
Repository: https://github.com/wenzhilics/HomoGCL ⭐ 14
Last checked: 1 month ago
Abstract
Contrastive learning (CL) has become the de facto learning paradigm in self-supervised learning on graphs, which generally follows the "augmenting-contrasting" learning scheme. However, we observe that unlike CL in the computer vision domain, CL in the graph domain performs decently even without augmentation. We conduct a systematic analysis of this phenomenon and argue that homophily, i.e., the principle that "like attracts like", plays a key role in the success of graph CL. Inspired to leverage this property explicitly, we propose HomoGCL, a model-agnostic framework to expand the positive set using neighbor nodes with neighbor-specific significances. Theoretically, HomoGCL introduces a stricter lower bound of the mutual information between raw node features and node embeddings in augmented views. Furthermore, HomoGCL can be combined with existing graph CL models in a plug-and-play way with light extra computational overhead. Extensive experiments demonstrate that HomoGCL yields multiple state-of-the-art results across six public datasets and consistently brings notable performance improvements when applied to various graph CL methods. Code is available at https://github.com/wenzhilics/HomoGCL.
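The core idea in the abstract, expanding the contrastive positive set with graph neighbors weighted by neighbor-specific significances, can be sketched as a modified InfoNCE loss. The sketch below is an illustration of that idea in NumPy, not the paper's exact formulation; the function name `homophily_contrastive_loss` and the per-edge weight matrix `neigh_weight` are assumptions for the example.

```python
import numpy as np

def homophily_contrastive_loss(z1, z2, adj, neigh_weight, tau=0.5):
    """Homophily-expanded InfoNCE sketch (assumption based on the abstract):
    besides the anchor's counterpart in the other augmented view, its graph
    neighbors count as extra positives, each scaled by a per-edge weight."""
    # Cosine similarities between the two augmented views.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)                       # (n, n) similarity kernel
    denom = sim.sum(axis=1)                             # all cross-view pairs as candidates
    self_pos = np.diag(sim)                             # the standard positive pair
    neigh_pos = (adj * neigh_weight * sim).sum(axis=1)  # weighted neighbor positives
    return float(-np.log((self_pos + neigh_pos) / denom).mean())

# Toy example: random embeddings for two views of a 6-node graph.
rng = np.random.default_rng(0)
n, d = 6, 8
z1 = rng.normal(size=(n, d))
z2 = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.3).astype(float)  # random adjacency (no self-loops)
np.fill_diagonal(adj, 0.0)
w = rng.random((n, n))                          # hypothetical neighbor significances in [0, 1]
loss = homophily_contrastive_loss(z1, z2, adj, w)
```

With weights in [0, 1], the weighted positive mass never exceeds the denominator, so the loss stays non-negative; setting `adj` to all zeros recovers a plain InfoNCE-style loss.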
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning