🤖 AI Summary
Random masking in graph self-supervised learning often introduces task-irrelevant redundancy and discards discriminative structural information. Method: We propose the first conditional independence (CI)-guided latent-space masking paradigm, introducing CI into unsupervised graph masking design. Our approach achieves structural disentanglement via CI-aware latent factor decomposition and leverages high-confidence pseudo-labels from unsupervised graph clustering to drive dual-context reconstruction. Theoretically, we show that the learned representations are approximately linearly separable. Results: Across multiple graph benchmarks on node classification and link prediction, our method attains a markedly better average ranking than state-of-the-art approaches, empirically validating that CI-guided masking improves both the discriminability and the generalizability of graph representations.
📝 Abstract
Recent Self-Supervised Learning (SSL) methods that encapsulate relational information via masking in Graph Neural Networks (GNNs) have shown promising performance. However, most existing approaches rely on random masking strategies in either the feature or the graph space, which may fail to fully capture task-relevant information. We posit that this limitation stems from an inability to achieve minimum redundancy between masked and unmasked components while ensuring maximum relevance of both to potential downstream tasks. Conditional Independence (CI) inherently satisfies the minimum-redundancy and maximum-relevance criteria, but applying it typically requires access to downstream labels. To address this challenge, we introduce CIMAGE, a novel approach that leverages Conditional Independence to guide an effective masking strategy within the latent space. CIMAGE uses CI-aware latent factor decomposition to generate two distinct contexts, leveraging high-confidence pseudo-labels derived from unsupervised graph clustering. In this framework, the pretext task is to reconstruct the masked second context solely from the information provided by the first context. Our theoretical analysis further supports CIMAGE's CI-aware masking method by showing that the learned embedding is approximately linearly separable, which enables accurate predictions on downstream tasks. Comprehensive evaluations across diverse graph benchmarks illustrate the advantage of CIMAGE, which achieves notably higher average rankings on node classification and link prediction tasks. Finally, our model highlights the under-explored potential of CI in enhancing graph SSL methodologies and offers enriched insights for effective graph representation learning.
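As a rough illustration only (not the authors' implementation), the dual-context reconstruction pretext described above can be sketched with a toy linear decoder. The fixed half-split of latent dimensions and the least-squares decoder below are stand-ins for CIMAGE's CI-aware, pseudo-label-guided decomposition and its learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent node embeddings: n nodes, d latent factors.
n, d = 100, 8
Z = rng.normal(size=(n, d))

# Hypothetical split of latent factors into two "contexts".
# In CIMAGE this split would be CI-aware and guided by high-confidence
# pseudo-labels from unsupervised graph clustering; here it is a plain
# half split for illustration.
ctx1, ctx2 = Z[:, : d // 2], Z[:, d // 2 :]

# Pretext task: reconstruct the masked second context from the first.
# A closed-form least-squares map stands in for a learned decoder.
W, *_ = np.linalg.lstsq(ctx1, ctx2, rcond=None)
recon = ctx1 @ W
loss = float(np.mean((recon - ctx2) ** 2))
print(f"reconstruction MSE: {loss:.4f}")
```

Minimizing such a reconstruction loss encourages the first context to carry the information needed to predict the second, the intuition behind the minimum-redundancy / maximum-relevance criterion the paper formalizes via conditional independence.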