Khan-GCL: Kolmogorov-Arnold Network Based Graph Contrastive Learning with Hard Negatives

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional graph contrastive learning (GCL) suffers from two key bottlenecks: limited expressiveness of MLP-based encoders and semantically uninformative negative samples—random augmentations rarely yield “hard negatives,” while existing hard-negative sampling strategies ignore structural semantic disparities. To address these, this work introduces the Kolmogorov–Arnold Network (KAN) as the GCL encoder for the first time, leveraging its superior nonlinear modeling capacity and interpretable learnable coefficients. Building upon KAN’s coefficient structure, we propose a dual-path key feature identification mechanism that explicitly captures semantic distinctions across graphs, enabling semantics-driven hard negative generation. Evaluated on multiple benchmark datasets, our method achieves state-of-the-art performance in both graph classification and cross-domain transfer tasks, with substantial gains in generalization capability and model interpretability.

📝 Abstract
Graph contrastive learning (GCL) has demonstrated great promise for learning generalizable graph representations from unlabeled data. However, conventional GCL approaches face two critical limitations: (1) the restricted expressive capacity of multilayer perceptron (MLP) based encoders, and (2) suboptimal negative samples that either come from random augmentations, which fail to provide effective 'hard negatives', or are generated as hard negatives without addressing the semantic distinctions crucial for discriminating graph data. To this end, we propose Khan-GCL, a novel framework that integrates the Kolmogorov-Arnold Network (KAN) into the GCL encoder architecture, substantially enhancing its representational capacity. Furthermore, we exploit the rich information embedded within KAN coefficient parameters to develop two novel critical feature identification techniques that enable the generation of semantically meaningful hard negative samples for each graph representation. These strategically constructed hard negatives guide the encoder to learn more discriminative features by emphasizing critical semantic differences between graphs. Extensive experiments demonstrate that our approach achieves state-of-the-art performance compared to existing GCL methods across a variety of datasets and tasks.
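The core architectural idea, replacing an MLP encoder layer with a KAN layer whose edges carry learnable univariate functions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses Gaussian radial basis functions as a simple stand-in for the B-spline bases typically used in KANs, and all names and hyperparameters are assumptions.

```python
import numpy as np

class KANLayer:
    """Minimal Kolmogorov-Arnold layer sketch: each edge (i, j) carries a
    learnable univariate function phi_ij, parameterized here as a linear
    combination of Gaussian radial basis functions (an illustrative
    substitute for B-spline bases). Output_j = sum_i phi_ij(x_i)."""

    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-1.0, 1.0), seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(grid_range[0], grid_range[1], num_basis)
        self.width = (grid_range[1] - grid_range[0]) / num_basis
        # Learnable coefficients: one weight per (input, output, basis) triple.
        # These are the parameters the paper mines for critical features.
        self.coef = rng.normal(0.0, 0.1, size=(in_dim, out_dim, num_basis))

    def forward(self, x):
        # x: (batch, in_dim) -> basis activations: (batch, in_dim, num_basis)
        basis = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # Sum over inputs i and basis functions k to get each output j.
        return np.einsum("bik,iok->bo", basis, self.coef)

# Usage: a layer mapping 4-dimensional node features to 2 dimensions.
layer = KANLayer(in_dim=4, out_dim=2)
out = layer.forward(np.zeros((3, 4)))  # batch of 3 inputs
```

Because every edge function is an explicit coefficient vector rather than an opaque weight, the encoder's learned coefficients remain inspectable, which is the property the hard-negative techniques below rely on.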
Problem

Research questions and friction points this paper is trying to address.

Enhancing GCL encoder capacity beyond MLP limitations
Generating semantically meaningful hard negative samples
Improving graph representation discriminative feature learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Kolmogorov-Arnold Network for GCL encoder
Develops feature identification for hard negatives
Enhances semantic discrimination in graph representations
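The hard-negative idea in the bullets above can be sketched as: score each embedding dimension by the magnitude of its associated KAN coefficients (a proxy for "critical features"), then perturb only those dimensions so the negative differs from the anchor precisely where it matters semantically. The function name, the importance scores, and the Gaussian perturbation rule are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def hard_negatives(z, importance, top_k=2, eps=0.5, seed=0):
    """Illustrative sketch of semantics-driven hard negative generation.

    z          : (batch, dim) graph embeddings from the KAN encoder.
    importance : (dim,) per-dimension scores derived from KAN coefficient
                 magnitudes (hypothetical; the paper proposes two
                 identification techniques).
    Perturbs only the top_k most important dimensions, leaving the rest
    untouched, so negatives are close overall but differ on critical
    semantic features.
    """
    rng = np.random.default_rng(seed)
    critical = np.argsort(-np.abs(importance))[:top_k]  # most important dims
    neg = z.copy()
    neg[:, critical] += rng.normal(0.0, eps, size=(z.shape[0], top_k))
    return neg, critical
```

In a contrastive loss, these negatives would sit alongside the usual in-batch negatives, pushing the encoder to separate graphs along the identified critical dimensions rather than along incidental ones.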
Zihu Wang
Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106
Boxun Xu
University of California, Santa Barbara
Brain-inspired ML · Computer Architecture · Efficient AI · HW/SW Co-design · Generative AI
Hejia Geng
Researcher @ Oxford
Peng Li
Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106