🤖 AI Summary
Node classification on real-world graphs—critical for high-stakes applications such as human trafficking detection and misinformation monitoring—is severely hindered by label scarcity and pervasive label noise. To address this, we propose WSNET, the first contrastive learning framework tailored for weakly supervised graph learning. WSNET unifies the modeling of multi-source noisy labels, graph topology, and node features, and introduces a novel contrastive objective explicitly designed for weak supervision. By integrating graph neural networks with noise-robust representation learning, WSNET significantly enhances model generalizability and robustness under low-quality annotations. Extensive experiments on three real-world graph datasets and synthetic noise benchmarks demonstrate that WSNET achieves up to a 15% absolute improvement in F1-score over state-of-the-art methods, validating its effectiveness and practical utility in weakly supervised graph learning.
📝 Abstract
Node classification in real world graphs often suffers from label scarcity and noise, especially in high stakes domains like human trafficking detection and misinformation monitoring. While direct supervision is limited, such graphs frequently contain weak signals, noisy or indirect cues, that can still inform learning. We propose WSNET, a novel weakly supervised graph contrastive learning framework that leverages these weak signals to guide robust representation learning. WSNET integrates graph structure, node features, and multiple noisy supervision sources through a contrastive objective tailored for weakly labeled data. Across three real world datasets and synthetic benchmarks with controlled noise, WSNET consistently outperforms state of the art contrastive and noisy label learning methods by up to 15% in F1 score. Our results highlight the effectiveness of contrastive learning under weak supervision and the promise of exploiting imperfect labels in graph based settings.