CollectiveKV: Decoupling and Sharing Collaborative Information in Sequential Recommendation

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high storage overhead and inference latency in sequential recommendation systems caused by KV caching. It is the first to reveal that KV caches across users contain shareable collaborative signals and proposes a cross-user KV sharing mechanism. By applying singular value decomposition (SVD) to analyze KV structures, the method decouples KV information into a learnable global shared component and a user-specific component, which are dynamically concatenated to reconstruct the full representation. Extensive experiments on five mainstream models and three datasets demonstrate that the proposed approach compresses KV cache size to merely 0.8% of its original footprint while maintaining or even improving recommendation performance, achieving a remarkable balance between efficiency and effectiveness.

📝 Abstract
Sequential recommendation models are widely used in applications, yet they face stringent latency requirements. Mainstream models leverage the Transformer attention mechanism to improve performance, but its computational complexity grows with the sequence length, leading to a latency challenge for long sequences. Consequently, KV cache technology has recently been explored in sequential recommendation systems to reduce inference latency. However, KV cache introduces substantial storage overhead in sequential recommendation systems, which often have a large user base with potentially very long user history sequences. In this work, we observe that KV sequences across different users exhibit significant similarities, indicating the existence of collaborative signals in KV. Furthermore, we analyze the KV using singular value decomposition (SVD) and find that the information in KV can be divided into two parts: the majority of the information is shareable across users, while a small portion is user-specific. Motivated by this, we propose CollectiveKV, a cross-user KV sharing mechanism. It captures the information shared across users through a learnable global KV pool. During inference, each user retrieves high-dimensional shared KV from the pool and concatenates them with low-dimensional user-specific KV to obtain the final KV. Experiments on five sequential recommendation models and three datasets show that our method can compress the KV cache to only 0.8% of its original size, while maintaining or even enhancing model performance.
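The SVD analysis described in the abstract can be illustrated with a small numerical sketch. This is not the paper's implementation; all dimensions, the rank `r_shared`, and the simulated KV data are hypothetical, chosen only to show how stacking KV matrices across users and keeping the top singular directions separates a dominant shareable component from a small user-specific residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, seq_len, d = 8, 32, 64
r_shared = 4  # rank of the hypothetical shared subspace

# Simulated per-user KV matrices: a strong common low-rank structure
# (the "collaborative signal") plus small user-specific noise.
shared_basis = rng.standard_normal((r_shared, d))
kv = np.stack([
    rng.standard_normal((seq_len, r_shared)) @ shared_basis
    + 0.1 * rng.standard_normal((seq_len, d))
    for _ in range(n_users)
])  # shape: (n_users, seq_len, d)

# SVD of the user-stacked KV: the top singular directions play the role
# of a global shared component; the residual is the user-specific part.
stacked = kv.reshape(-1, d)                       # (n_users * seq_len, d)
U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
V_shared = Vt[:r_shared]                          # shared subspace basis

# Per-user decomposition: project onto the shared basis, keep the residual.
coeffs = kv @ V_shared.T                          # low-dimensional user codes
shared_part = coeffs @ V_shared                   # reconstruction from the pool
residual = kv - shared_part                       # small user-specific remainder

energy_shared = np.linalg.norm(shared_part) ** 2 / np.linalg.norm(kv) ** 2
print(f"fraction of KV energy in shared subspace: {energy_shared:.3f}")
```

In this toy setup, storing only `V_shared` globally plus the low-dimensional `coeffs` per user captures nearly all of the KV energy, which is the intuition behind the paper's learnable global KV pool concatenated with a low-dimensional user-specific KV.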
Problem

Research questions and friction points this paper is trying to address.

sequential recommendation
KV cache
storage overhead
latency
long sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
cross-user sharing
sequential recommendation
collaborative signals
global KV pool
Jingyu Li
University of Science and Technology of China
Deep Learning, Computer Vision, Natural Language Processing
Zhaocheng Du
Huawei Noah's Ark Lab
Machine Learning, Recommendation System
Qianhui Zhu
Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong 518107, China
Kaiyuan Li
Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong 518107, China
Zhicheng Zhang
Carnegie Mellon University
Reinforcement Learning, Explainable RL
Song-Li Wu
Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong 518107, China
Chaolang Li
School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong 518107, China
Pengwen Dai
School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong 518107, China