From Noisy to Native: LLM-driven Graph Restoration for Test-Time Graph Domain Adaptation

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Test-time graph domain adaptation (GDA) faces the practical challenge of inaccessible source-domain data, which hinders conventional knowledge transfer. Method: This paper proposes the first generative graph restoration framework that inversely reconstructs the target graph structure into a source-like state, enabling source-free knowledge transfer. It reformulates GDA as a graph structural recovery problem, integrating node representation compression, graph diffusion modeling, and quantized encoding, and, as its core novelty, introduces large language model (LLM)-driven graph generation. A dual-objective reinforcement learning strategy, guided by alignment quality and prediction confidence, fine-tunes the LLM. Results: Extensive experiments on multiple benchmark datasets demonstrate that the method significantly outperforms existing source-free GDA approaches, validating the effectiveness and generalizability of the generative paradigm for graph domain adaptation.
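To make the pipeline concrete, below is a minimal PyTorch sketch of the first two stages the summary names: compressing node features into compact latents, then running a reverse-diffusion loop that nudges target latents toward a source-like state. All class names, dimensions, and the timestep conditioning are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class LatentCompressor(nn.Module):
    """Hypothetical compression module: maps raw node features to
    compact latent vectors, as in the summary's first stage."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 2 * latent_dim),
            nn.ReLU(),
            nn.Linear(2 * latent_dim, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)


def diffusion_restore(z: torch.Tensor, denoiser: nn.Module, steps: int = 50) -> torch.Tensor:
    """Toy reverse-diffusion loop: treats target-domain latents as a
    'noisy' state and iteratively denoises them toward a source-like
    state. The schedule and denoiser are illustrative placeholders."""
    for t in reversed(range(steps)):
        t_embed = torch.full((z.size(0), 1), t / steps)  # scalar timestep conditioning
        z = denoiser(torch.cat([z, t_embed], dim=-1))    # one denoising step
    return z


# Usage with random stand-in data (all dimensions are arbitrary):
compressor = LatentCompressor(in_dim=128, latent_dim=32)
denoiser = nn.Sequential(nn.Linear(33, 64), nn.ReLU(), nn.Linear(64, 32))
z_target = compressor(torch.randn(100, 128))  # 100 target-domain nodes
z_restored = diffusion_restore(z_target, denoiser)
```

In the paper, the denoiser would be trained to model the graph restoration process; here it is a random stand-in, included only to show the data flow between stages.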

📝 Abstract
Graph domain adaptation (GDA) has attracted considerable attention due to its effectiveness in addressing the domain shift between training and test data. A significant bottleneck in existing graph domain adaptation methods is their reliance on source-domain data, which is often unavailable due to privacy or security concerns. This limitation has driven the development of Test-Time Graph Domain Adaptation (TT-GDA), which aims to transfer knowledge without accessing the source examples. Inspired by the generative power of large language models (LLMs), we introduce a novel framework that reframes TT-GDA as a generative graph restoration problem, "restoring the target graph to its pristine, source-domain-like state". There are two key challenges: (1) We need to construct a reasonable graph restoration process and design an effective encoding scheme that an LLM can understand, bridging the modality gap. (2) We need to devise a mechanism that ensures the restored graph acquires the intrinsic features of the source domain, even without access to the source data. To ensure the effectiveness of graph restoration, we propose GRAIL, which restores the target graph into a state that is well-aligned with the source domain. Specifically, we first compress the node representations into compact latent features and then use a graph diffusion process to model the graph restoration process. A quantization module then encodes the restored features into discrete tokens. Building on this, an LLM is fine-tuned as a generative restorer that transforms a "noisy" target graph into a "native" one. To further improve restoration quality, we introduce a reinforcement learning process guided by specialized alignment and confidence rewards. Extensive experiments demonstrate the effectiveness of our approach across various datasets.
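The quantization step from the abstract can be sketched as a standard vector-quantization lookup. The codebook size, the `<g…>` token format, and the `tokens_to_prompt` helper are hypothetical stand-ins; the abstract does not specify how GRAIL serializes graph tokens for the LLM.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: assigns each restored latent to its nearest
    codebook entry, producing discrete tokens an LLM can consume.
    Codebook size and dimension are illustrative choices."""
    def __init__(self, num_codes: int = 512, code_dim: int = 32):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z: torch.Tensor):
        d = torch.cdist(z, self.codebook.weight)  # (N, num_codes) pairwise distances
        tokens = d.argmin(dim=-1)                 # discrete token id per node
        return tokens, self.codebook(tokens)      # ids and quantized latents


def tokens_to_prompt(tokens: torch.Tensor) -> str:
    """Hypothetical serialization: render node tokens as special symbols
    in a text prompt for the fine-tuned LLM restorer."""
    return "restore graph: " + " ".join(f"<g{t}>" for t in tokens.tolist())


# Example: quantize 100 restored 32-d latents and build a prompt.
vq = VectorQuantizer()
tokens, z_q = vq(torch.randn(100, 32))
prompt = tokens_to_prompt(tokens)
```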
Problem

Research questions and friction points this paper is trying to address.

Restoring target graphs to source-domain-like states
Bridging modality gaps for LLM graph understanding
Achieving domain alignment without source data access
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven generative graph restoration for domain adaptation
Graph diffusion and quantization for feature encoding
Reinforcement learning with alignment and confidence rewards (see the sketch below)
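The dual-objective reinforcement learning can be illustrated with a simple REINFORCE-style surrogate. Everything here is an assumption layered on the abstract's description: `alignment_reward` substitutes a stored source prototype for the inaccessible source data, `confidence_reward` uses negative predictive entropy from a frozen source classifier, and `lam` is a made-up trade-off weight.

```python
import torch
import torch.nn.functional as F

def alignment_reward(restored_z: torch.Tensor, source_proto: torch.Tensor) -> torch.Tensor:
    """Hypothetical alignment reward: cosine similarity between restored
    node latents and a source-domain prototype, standing in for the
    inaccessible source data."""
    return F.cosine_similarity(restored_z, source_proto.expand_as(restored_z), dim=-1)

def confidence_reward(logits: torch.Tensor) -> torch.Tensor:
    """Hypothetical confidence reward: negative predictive entropy of a
    frozen source classifier evaluated on the restored graph."""
    p = logits.softmax(dim=-1)
    return (p * p.clamp_min(1e-9).log()).sum(dim=-1)  # higher = more confident

def dual_objective_loss(log_probs, restored_z, logits, source_proto, lam: float = 0.5):
    """REINFORCE-style surrogate loss: the LLM's generation
    log-probabilities are weighted by the combined reward; `lam` trades
    alignment quality against prediction confidence."""
    reward = alignment_reward(restored_z, source_proto) + lam * confidence_reward(logits)
    return -(log_probs * reward.detach()).mean()


# Example with stand-in tensors (all shapes are hypothetical):
loss = dual_objective_loss(
    log_probs=torch.randn(100),      # per-node generation log-probs
    restored_z=torch.randn(100, 32),
    logits=torch.randn(100, 7),      # frozen classifier over 7 classes
    source_proto=torch.randn(32),
)
```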
🔎 Similar Papers
No similar papers found.
Xiangwei Lv
Zhejiang University, Hangzhou, China
JinLuan Yang
Zhejiang University, Hangzhou, China
Wang Lin
Zhejiang University
Computer Vision · Multi-Modal Learning · Video Understanding
Jingyuan Chen
Zhejiang University, Hangzhou, China
Beishui Liao
Professor, Zhejiang University
Argumentation · Nonmonotonic Reasoning · Logic · Artificial Intelligence