🤖 AI Summary
Graph neural networks suffer significant performance degradation under out-of-distribution (OOD) test scenarios, and existing test-time adaptation methods are prone to catastrophic forgetting. To address this challenge, this work proposes TTReFT, a novel test-time representation fine-tuning framework that shifts the adaptation objective from parameter adjustment to intervention in latent representations. TTReFT integrates three key innovations: uncertainty-guided node selection, low-rank representation updating, and an intervention-aware dynamic masked autoencoder. Extensive experiments across five benchmark datasets demonstrate that TTReFT consistently outperforms current methods in OOD settings, effectively balancing adaptation efficiency with model stability.
📝 Abstract
Graph Neural Networks frequently exhibit significant performance degradation in out-of-distribution (OOD) test scenarios. While test-time training (TTT) offers a promising solution, the existing Parameter Finetuning (PaFT) paradigm suffers from catastrophic forgetting, hindering its real-world applicability. We propose TTReFT, a novel Test-Time Representation FineTuning framework that shifts the adaptation target from model parameters to latent representations. Specifically, TTReFT achieves this through three key innovations: (1) uncertainty-guided node selection for targeted interventions, (2) low-rank representation interventions that preserve pre-trained knowledge, and (3) an intervention-aware masked autoencoder that dynamically adjusts its masking strategy to accommodate the node selection scheme. Theoretically, we establish performance guarantees for TTReFT in OOD settings. Empirically, extensive experiments across five benchmark datasets demonstrate that TTReFT achieves consistent and superior performance. Our work establishes representation finetuning as a new paradigm for graph TTT, offering both theoretical grounding and immediate practical utility for real-world deployment.
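To make the first two innovations concrete, here is a minimal sketch (not the authors' implementation) of how uncertainty-guided node selection could be combined with a low-rank representation intervention: nodes are ranked by predictive entropy, and only the most uncertain ones receive a low-rank additive edit `h' = h + U(Vh)` while all model parameters and remaining representations stay frozen. The function names, the entropy criterion, and the random low-rank factors `U`, `V` are illustrative assumptions.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy per node; higher values mean more predictive uncertainty
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def low_rank_intervene(H, probs, rank=4, top_k=2, seed=0):
    """Hypothetical sketch of a test-time representation intervention.

    H      : (n, d) node representations from a frozen GNN encoder
    probs  : (n, c) softmax class probabilities for each node
    rank   : rank of the intervention, with rank << d
    top_k  : number of most-uncertain nodes to edit
    """
    n, d = H.shape
    rng = np.random.default_rng(seed)
    # Low-rank factors; in a real system these would be the trainable
    # adaptation parameters, optimized at test time (frozen here for brevity).
    U = rng.normal(scale=0.01, size=(d, rank))
    V = rng.normal(scale=0.01, size=(rank, d))
    # Uncertainty-guided node selection: pick the top-k highest-entropy nodes
    selected = np.argsort(-entropy(probs))[:top_k]
    # Apply the additive low-rank edit h' = h + U(Vh) only to selected nodes
    H_new = H.copy()
    H_new[selected] += H[selected] @ V.T @ U.T
    return H_new, selected
```

Because the edit touches only a low-rank subspace of a few selected nodes, the pre-trained representations of all other nodes are left byte-for-byte intact, which is the mechanism the abstract credits for avoiding catastrophic forgetting.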