AI Summary
In dynamic recommendation, large-scale graph fine-tuning faces dual challenges: prohibitive computational overhead and inadequate representation of sparse nodes, both stemming from the long-tailed degree distributions induced by interaction sparsity. To address this, we propose GraphSASA, a parameter-efficient dynamic graph recommendation framework. GraphSASA enhances the robustness of sparse-node representations via test-time hierarchical graph augmentation and introduces singular-value adaptive fine-tuning: it freezes the backbone parameters and optimizes only a low-rank singular-value subspace. This strategy reduces trainable parameters by over 90% while significantly improving long-tail node modeling. Evaluated on three large-scale dynamic graph datasets, GraphSASA achieves state-of-the-art recommendation performance at substantially lower computational cost, effectively balancing accuracy and efficiency.
Abstract
Dynamic recommendation, which models user preferences from historical interactions to provide recommendations at the current time, plays a key role in many personalized services. Recent work shows that pre-trained dynamic graph neural networks (GNNs) can achieve excellent performance. However, existing methods that fine-tune node representations at large scale demand significant computational resources. Additionally, the long-tail distribution of node degrees leads to insufficient representations for nodes with sparse interactions, posing challenges for efficient fine-tuning. To address these issues, we introduce GraphSASA, a novel method for efficient fine-tuning in dynamic recommendation systems. GraphSASA employs test-time augmentation that leverages the similarity of node representation distributions during hierarchical graph aggregation, enhancing node representations. It then applies singular value decomposition, freezing the singular vector matrices while focusing fine-tuning on the derived singular values, which reduces the parameter burden of fine-tuning and improves adaptability. Experimental results demonstrate that our method achieves state-of-the-art performance on three large-scale datasets.
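To make the singular-value adaptive idea concrete, here is a minimal NumPy sketch, not the paper's implementation: a pre-trained weight matrix is factored once via SVD, the singular vector factors are frozen, and only the singular values (optionally just the top-r of them) are treated as trainable. All dimensions, the rank `r`, and the toy update step are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of singular-value adaptive fine-tuning (assumed setup,
# not GraphSASA's actual code). Decompose a frozen pre-trained weight
# W = U @ diag(s) @ Vt, then fine-tune only the singular values s while the
# singular vector matrices U and Vt stay frozen.

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 32, 8              # r: size of the tuned low-rank subspace

W = rng.normal(size=(d_out, d_in))      # stands in for a pre-trained weight
U, s, Vt = np.linalg.svd(W, full_matrices=False)

s_tuned = s.copy()                      # the ONLY trainable parameters
delta = 0.01 * rng.normal(size=r)       # toy "gradient" step on the top-r values
s_tuned[:r] += delta

W_adapted = U @ np.diag(s_tuned) @ Vt   # adapted weight, U and Vt untouched

trainable = r                           # vs d_out * d_in for full fine-tuning
reduction = 1 - trainable / (d_out * d_in)
print(f"trainable params: {trainable} ({reduction:.1%} fewer than full fine-tuning)")
```

Because only `r` scalars are updated per weight matrix, the trainable-parameter count drops by orders of magnitude relative to full fine-tuning, which is the mechanism behind the >90% reduction claimed above.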