Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs

June 27, 2019 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era, a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, ASGCN, BGNN-Adv-loss.txt, BGNN, Figures_for_Experiments.ipynb, GraphSage, Node2Vec, README.md, classifier, data, deeperLayerCora.log, docs, gae.log, gae, layer_influence.log, log_run_abcgraph_adv_tencent.txt, requirements.txt, run.sh, run_bgnn_adv.sh, run_bgnn_mlp.sh, run_gae.sh, run_gcn.sh, run_graphsage.sh, run_node2vec.sh, run_pure_feature.sh, test, visualization.py

Authors: Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Junzhou Huang, Xiang Ren, Cyrus Shahabi
arXiv ID: 1906.11994
Category: cs.SI (Social & Information Networks)
Cross-listed: cs.AI, cs.LG, stat.ML
Citations: 5
Venue: arXiv.org
Repository: https://github.com/chaoyanghe/bipartite-graph-learning (⭐ 31)
Last Checked: 1 month ago
Abstract
Bipartite graphs have been used to represent data relationships in many data-mining applications, such as E-commerce recommendation systems. Since learning in graph space is more complicated than in Euclidean space, recent studies have extensively utilized neural networks to effectively and efficiently embed a graph's nodes into a multidimensional space. However, this embedding method has not yet been applied to large-scale bipartite graphs. Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains. Thus, we propose Cascade Bipartite Graph Neural Networks (Cascade-BGNN), a novel node representation learning framework for bipartite graphs that is domain-consistent, self-supervised, and efficient. To efficiently aggregate information both across and within the two partitions of a bipartite graph, BGNN utilizes a customized Inter-domain Message Passing (IDMP) and Intra-domain Alignment (IDA), which is our adaptation of adversarial learning, for message aggregation across and within partitions, respectively. BGNN is trained in a self-supervised manner. Moreover, we formulate a multi-layer BGNN in a cascaded training manner to enable multi-hop relationship modeling while improving training efficiency. Extensive experiments on several datasets of varying scales verify the effectiveness and efficiency of BGNN over baselines. Our design is further affirmed through theoretical analysis for domain alignment. The scalability of BGNN is additionally verified through its demonstrated rapid training speed and low memory cost over a large-scale real-world bipartite graph.
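The abstract's core idea of Inter-domain Message Passing, where each partition of a bipartite graph aggregates features from the opposite partition, can be sketched in a few lines of numpy. This is a minimal illustration of that general mechanism, not the paper's actual implementation; the function name, the mean-aggregation choice, and the projection matrices `W_u`/`W_v` are assumptions for the sketch.

```python
import numpy as np

def inter_domain_message_passing(B, X_u, X_v, W_u, W_v):
    """One hypothetical IDMP step on a bipartite graph.

    B   : (n_u, n_v) biadjacency matrix, B[i, j] = 1 if u_i links to v_j
    X_u : (n_u, d_u) features of the U partition
    X_v : (n_v, d_v) features of the V partition
    W_u, W_v : projections mapping each side's aggregated opposite-domain
               features into a shared embedding dimension
    """
    # Normalize by degree so each node averages its neighbors' features
    deg_u = B.sum(axis=1, keepdims=True).clip(min=1)   # (n_u, 1)
    deg_v = B.sum(axis=0, keepdims=True).clip(min=1)   # (1, n_v)
    H_u = (B / deg_u) @ X_v @ W_v      # U nodes aggregate V-side features
    H_v = (B.T / deg_v.T) @ X_u @ W_u  # V nodes aggregate U-side features
    return np.tanh(H_u), np.tanh(H_v)

# Toy bipartite graph: 3 U-nodes, 2 V-nodes, with distinct feature spaces
rng = np.random.default_rng(0)
B = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
X_u, X_v = rng.normal(size=(3, 4)), rng.normal(size=(2, 5))
W_u, W_v = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
H_u, H_v = inter_domain_message_passing(B, X_u, X_v, W_u, W_v)
print(H_u.shape, H_v.shape)  # both partitions now share an 8-dim space
```

In the paper's full design, an adversarial Intra-domain Alignment step and a cascaded (layer-by-layer) training scheme would follow this aggregation; those parts are omitted here.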
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Social & Info Networks