Masked Graph Modeling with Multi-View Contrast

Abstract

Masked modeling has recently achieved remarkable success in vision and language, sparking a surge of interest in graph-related research. However, Masked Graph Modeling (MGM), which captures fine-grained local information by masking and reconstructing low-level elements such as nodes, edges, and features, remains sub-optimal, particularly on tasks requiring high-quality graph-level representations, because this local perspective disregards the graph's global information and structure. To address these limitations, we propose a novel graph pre-training framework called Graph Contrastive Masked Autoencoder (GCMAE). GCMAE leverages the strengths of both MGM and Graph Contrastive Learning (GCL) to provide a comprehensive view of both local and global information. The framework learns global graph representations via instance discrimination and reconstructs the graph from masked low-level elements. We further equip the framework with a novel multi-view augmentation module to enhance the pre-trained model's robustness and generalization ability. We evaluate GCMAE on real-world biochemistry and social network datasets, conducting extensive experiments on node classification, graph classification, and transfer learning to downstream graph classification tasks. The results demonstrate that GCMAE's joint local and global perspective benefits model pre-training. Moreover, GCMAE outperforms existing MGM and GCL baselines, demonstrating its effectiveness on downstream tasks.
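
To make the two objectives concrete, below is a minimal, self-contained PyTorch sketch of the general recipe the abstract describes: a masked feature-reconstruction loss combined with a graph-level InfoNCE contrast between two augmented views. This is not GCMAE's actual architecture or augmentation module; the encoder, all helper names (DenseGCN, normalize_adj, mask_features, drop_edges, infonce, random_graph), and all hyperparameters (mask ratio, edge-drop ratio, loss weight) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCN(nn.Module):
    # Generic two-layer GCN on a dense normalized adjacency matrix (placeholder encoder).
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, adj, x):
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)


def normalize_adj(adj):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    adj = adj + torch.eye(adj.size(0))
    d = adj.sum(dim=1).clamp(min=1).pow(-0.5)
    return d[:, None] * adj * d[None, :]


def mask_features(x, mask_ratio=0.5):
    # MGM-style masking: zero the features of a fixed fraction of randomly chosen nodes.
    n_mask = max(1, int(mask_ratio * x.size(0)))
    mask = torch.zeros(x.size(0), dtype=torch.bool)
    mask[torch.randperm(x.size(0))[:n_mask]] = True
    x_masked = x.clone()
    x_masked[mask] = 0.0
    return x_masked, mask


def drop_edges(adj, drop_ratio=0.2):
    # Structural augmentation: randomly remove edges, keeping the view symmetric.
    keep = (torch.rand_like(adj) > drop_ratio).float()
    keep = torch.triu(keep, diagonal=1)
    keep = keep + keep.T + torch.eye(adj.size(0))
    return adj * keep


def infonce(z1, z2, tau=0.5):
    # Instance discrimination between graph-level embeddings of two views;
    # the other graphs in the batch serve as negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))


def random_graph(n=10, f=16, p=0.3):
    # A random undirected toy graph: symmetric 0/1 adjacency plus node features.
    a = torch.triu((torch.rand(n, n) < p).float(), diagonal=1)
    return normalize_adj(a + a.T), torch.randn(n, f)


# Toy pre-training step on a batch of random graphs.
torch.manual_seed(0)
encoder = DenseGCN(in_dim=16, hid_dim=32, out_dim=32)
decoder = nn.Linear(32, 16)  # reconstructs masked node features from embeddings
graphs = [random_graph() for _ in range(8)]

recon_loss, view1, view2 = 0.0, [], []
for adj, x in graphs:
    # Masked-graph-modeling branch: reconstruct the features of masked nodes.
    x_masked, mask = mask_features(x)
    recon_loss = recon_loss + F.mse_loss(decoder(encoder(adj, x_masked))[mask], x[mask])
    # Contrastive branch: mean-pooled graph embeddings of two augmented views.
    view1.append(encoder(drop_edges(adj), x).mean(dim=0))
    view2.append(encoder(adj, F.dropout(x, p=0.2)).mean(dim=0))

contrast_loss = infonce(torch.stack(view1), torch.stack(view2))
loss = recon_loss / len(graphs) + 0.5 * contrast_loss  # 0.5 is an illustrative weight
loss.backward()

The two branches share the encoder: the reconstruction term pushes it to retain fine-grained local information, while the contrastive term discriminates whole-graph embeddings against other graphs in the batch, supplying the global signal that the abstract argues MGM alone lacks.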

Publication
In ICDE 2024

Citation:

@inproceedings{GCMAE,
  title = {Masked Graph Modeling with Multi-View Contrast},
  author = {Yanchen Luo and Sihang Li and Yongduo Sui and Junkang Wu and Jiancan Wu and Xiang Wang},
  booktitle = {ICDE},
  year = {2024}
}
Authors:
Yanchen Luo (罗晏宸)
Sihang Li (李思杭)
Yongduo Sui (隋勇铎)
Junkang Wu (吴俊康)
Jiancan Wu (吴剑灿), Postdoctoral Researcher
Xiang Wang (王翔), Professor