Lighter-X: An Efficient and Plug-and-play Strategy for Graph-based Recommendation through Decoupled Propagation

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional graph neural network (GNN)-based recommender models (e.g., LightGCN) exhibit parameter complexity linear in the total number of users and items, *n*, i.e., 𝒪(*n*×*d*), hindering scalability to large-scale deployments. To address this, we propose a plug-and-play lightweight framework that decouples the message propagation mechanism and jointly compresses both the adjacency structure and embedding representations: (i) sparsifying the user-item adjacency matrix, and (ii) projecting high-dimensional embeddings into a low-dimensional subspace of dimension *h* ≪ *n*. This reduces overall parameter complexity to 𝒪(*h*×*d*), without altering the backbone GNN architecture. The framework is model-agnostic, compatible with mainstream GNN recommenders, and supports end-to-end training and efficient inference. Extensive experiments on large-scale benchmarks demonstrate that our method achieves comparable—or even superior—recommendation accuracy using only 1% of LightGCN’s parameters, significantly enhancing scalability and deployment efficiency.
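The headline parameter saving can be illustrated with a toy sketch. The mapping below (a fixed hash-style assignment from each of the *n* nodes to one of *h* rows of a compact trainable table) is a hypothetical stand-in for the paper's compression scheme, chosen only to show how parameter count drops from 𝒪(*n*×*d*) to 𝒪(*h*×*d*); the actual Lighter-X projection is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, h = 100_000, 64, 1_000  # nodes, embedding dim, compressed dim (h << n)

# Full embedding table, as in LightGCN: O(n*d) trainable parameters.
full_params = n * d

# Compressed scheme (illustrative, not the paper's exact mechanism):
# each node is mapped to a row of a small table via a fixed, non-learned
# assignment, so only the h*d table counts as parameters: O(h*d).
assignment = rng.integers(0, h, size=n)  # fixed mapping, not learned
table = rng.standard_normal((h, d))      # trainable, O(h*d)

def node_embedding(node_id):
    """Look up the compact embedding for one node."""
    return table[assignment[node_id]]

compressed_params = h * d
print(f"full:       {full_params:,} params")        # 6,400,000
print(f"compressed: {compressed_params:,} params")  # 64,000 = 1% of full
print(node_embedding(42).shape)                     # (64,)
```

With these toy sizes the compressed table is exactly 1% of the full one, matching the scale of the savings reported against LightGCN.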

📝 Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness in recommendation systems. However, conventional graph-based recommenders, such as LightGCN, require maintaining embeddings of size $d$ for each node, resulting in a parameter complexity of $\mathcal{O}(n \times d)$, where $n$ represents the total number of users and items. This scaling pattern poses significant challenges for deployment on large-scale graphs encountered in real-world applications. To address this scalability limitation, we propose **Lighter-X**, an efficient and modular framework that can be seamlessly integrated with existing GNN-based recommender architectures. Our approach substantially reduces both parameter size and computational complexity while preserving the theoretical guarantees and empirical performance of the base models, thereby enabling practical deployment at scale. Specifically, we analyze the original structure and inherent parameter redundancy of existing graph-based recommenders, identifying opportunities for optimization. Based on this insight, we propose an efficient compression scheme for the sparse adjacency structure and high-dimensional embedding matrices, achieving a parameter complexity of $\mathcal{O}(h \times d)$, where $h \ll n$. Furthermore, the model is optimized through a decoupled framework, reducing computational complexity during the training process and enhancing scalability. Extensive experiments demonstrate that Lighter-X achieves comparable performance to baseline models with significantly fewer parameters. In particular, on large-scale interaction graphs with millions of edges, we attain even better results with only 1% of the parameters of LightGCN.
Problem

Research questions and friction points this paper is trying to address.

Reducing parameter complexity in graph-based recommendation systems
Addressing scalability challenges for large-scale graph deployments
Optimizing computational efficiency while maintaining model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled propagation framework for graph-based recommendation
Efficient compression scheme for adjacency and embedding matrices
Plug-and-play modular framework reducing parameter complexity
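The decoupled-propagation idea in the list above can be sketched as follows. This is a minimal illustration in the spirit of SGC-style decoupling (the paper's exact scheme is not specified here): run the K-step normalized neighborhood averaging once as preprocessing on the user-item graph, so the downstream training loop never touches the adjacency matrix again.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bipartite interaction graph: 4 users x 3 items.
R = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)

# Symmetrically normalized adjacency of the joint user-item graph.
n_u, n_i = R.shape
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt

K = 3
X = rng.standard_normal((n_u + n_i, 8))  # initial node features/embeddings

# One-off propagation: after this loop, training is graph-free and the
# propagated features Z can feed any lightweight downstream model.
Z = X.copy()
for _ in range(K):
    Z = A_norm @ Z

print(Z.shape)  # (7, 8)
```

Because propagation happens once rather than at every training step, per-epoch cost no longer scales with the number of graph edges, which is what makes the framework practical on interaction graphs with millions of edges.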