🤖 AI Summary
To address the high inference latency that the KV cache induces in LLM-based recommender systems (LLMRec), this paper proposes EARN, an efficient inference framework. The authors first uncover two LLMRec-specific attention patterns: layer-wise attention sparsity inversion, and dual attention sinks at the head and tail of the input sequence. Leveraging these insights, EARN places register tokens at the sequence boundaries: the early layers compress the historical interaction sequence into this small set of register tokens, and all subsequent layers attend exclusively to them, sharply reducing the KV cache footprint and computational overhead. The method pairs this attention-pattern-driven compression with recommendation-specific fine-tuning, so inference acceleration and task accuracy are optimized together. Experiments across three benchmark datasets, two LLMRec paradigms, and two model architectures show that EARN achieves up to 3.79× inference speedup and 80.8% KV cache reduction while outperforming general-purpose fine-tuning baselines in recommendation accuracy.
📝 Abstract
Large Language Model-based generative recommendation (LLMRec) has achieved notable success, but it suffers from high inference latency due to massive computational overhead and the memory pressure of the KV cache. Existing KV cache reduction methods face critical limitations: cache compression offers marginal acceleration given recommendation tasks' short decoding steps, while prompt compression risks discarding vital interaction history. Through systematic analysis of attention patterns in LLMRec, we uncover two pivotal insights: 1) layer-wise attention sparsity inversion, where early layers retain dense informative patterns while later layers exhibit high redundancy, and 2) a dual attention sink phenomenon, where attention scores concentrate on both the head and tail tokens of input sequences. Motivated by these insights, we propose EARN, an efficient inference framework that leverages the early layers to compress information into register tokens placed at the input sequence boundaries, then attends solely to these tokens in the subsequent layers. Extensive experiments on three datasets, two LLMRec methods, and two LLM architectures demonstrate EARN's superiority, achieving up to 3.79× speedup and 80.8% KV cache reduction with better accuracy than general fine-tuning approaches. Our work bridges the efficiency-effectiveness gap in LLMRec, offering practical deployment advantages for industrial scenarios.
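The compression pattern described above (early layers attend over the full interaction history, later layers attend only to boundary register tokens) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the `split_layer` index, the `n_reg` register-token count, and the use of stock `nn.TransformerEncoderLayer` blocks in place of a decoder LLM with a KV cache are all choices made here for clarity.

```python
# Minimal sketch of the EARN-style inference pattern, assuming an
# encoder-layer stack in place of a decoder LLM. The class name,
# split_layer, and n_reg are illustrative, not values from the paper.
import torch
import torch.nn as nn

class EarnStyleStack(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=8,
                 split_layer=2, n_reg=4):
        super().__init__()
        self.split_layer = split_layer  # early layers see the full sequence
        self.n_reg = n_reg              # register tokens per boundary
        # learnable register tokens placed at both sequence boundaries
        self.head_regs = nn.Parameter(torch.randn(n_reg, d_model))
        self.tail_regs = nn.Parameter(torch.randn(n_reg, d_model))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        ])

    def forward(self, history_emb):  # (batch, seq_len, d_model)
        b = history_emb.size(0)
        # surround the history with register tokens: head and tail "sinks"
        x = torch.cat([self.head_regs.expand(b, -1, -1),
                       history_emb,
                       self.tail_regs.expand(b, -1, -1)], dim=1)
        for i, layer in enumerate(self.layers):
            if i == self.split_layer:
                # discard the history: from here on, only the 2 * n_reg
                # register tokens are processed (and would be KV-cached)
                x = torch.cat([x[:, :self.n_reg], x[:, -self.n_reg:]], dim=1)
            x = layer(x)
        return x  # (batch, 2 * n_reg, d_model)

# A 200-token interaction history collapses to 8 register tokens at layer 2.
stack = EarnStyleStack()
print(stack(torch.randn(1, 200, 64)).shape)  # torch.Size([1, 8, 64])
```

In this sketch, layers past the split only ever hold keys and values for 2 × n_reg tokens regardless of the history length, which is the mechanism behind the large KV cache reduction the paper reports; the exact split point and token budget in EARN are determined by its attention-pattern analysis rather than fixed constants.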