Decoding in Latent Spaces for Efficient Inference in LLM-based Recommendation

📅 2025-09-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high inference overhead induced by autoregressive token-level decoding in large language model (LLM)-based recommender systems, this paper proposes Light Latent-space Decoding (L2D). Instead of generating recommendations autoregressively at the token level, L2D directly aligns user preference representations with candidate item representations within an intermediate latent layer of the LLM, performing recommendation via similarity matching in this latent space. This approach pioneers the shift of generative recommendation from the output token space to the internal hidden state space, enabling end-to-end efficient inference without modifying the model architecture or training objective. Experiments demonstrate that L2D accelerates inference by over 10× while maintaining or surpassing the recommendation accuracy of conventional language-space decoding across multiple benchmark datasets. Consequently, L2D significantly enhances the practicality and deployability of generative recommendation systems.

📝 Abstract
Fine-tuning large language models (LLMs) for recommendation in a generative manner has delivered promising results, but incurs significant inference overhead due to autoregressive decoding in the language space. This work explores bypassing language-space decoding by directly matching candidate items against the LLM's internal thought representations in the latent space, eliminating the time-consuming autoregressive process to reduce computational costs. To this end, we introduce Light Latent-space Decoding (L2D), an effective and efficient latent-space decoding method. L2D represents user-preferred items using the hidden states of test sequences, which reflect the LLM's internal thought, and obtains candidate item representations from the hidden states of training sequences labeled with the corresponding candidate items. It then matches the two types of representations to decode items, achieving latent-space decoding. In this way, it enables efficient decoding without altering the LLM's generative tuning paradigm, thereby preserving performance. Extensive empirical results demonstrate that L2D is more than 10× faster than language-space decoding while maintaining or enhancing performance.
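The matching step the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes candidate item representations are formed by mean-pooling the hidden states of training sequences labeled with each item, and that decoding is cosine-similarity top-k; the function names `build_item_embeddings` and `decode_in_latent_space` are hypothetical.

```python
import numpy as np

def build_item_embeddings(train_hidden_states, train_labels, num_items, dim):
    """Average the hidden states of training sequences labeled with each
    candidate item to obtain that item's latent-space representation
    (one plausible reading of L2D's item-representation step)."""
    item_emb = np.zeros((num_items, dim))
    counts = np.zeros(num_items)
    for h, item in zip(train_hidden_states, train_labels):
        item_emb[item] += h
        counts[item] += 1
    # Avoid division by zero for items with no labeled training sequence.
    return item_emb / np.maximum(counts, 1)[:, None]

def decode_in_latent_space(user_hidden, item_emb, k=3):
    """Score every candidate item by cosine similarity with the user's
    hidden-state representation and return the top-k item indices,
    replacing autoregressive token-level generation with one matmul."""
    u = user_hidden / np.linalg.norm(user_hidden)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    scores = v @ u
    return np.argsort(-scores)[:k]
```

Because decoding reduces to a single similarity search over precomputed item embeddings, the cost is one forward pass plus a matrix-vector product, which is the source of the reported 10×-plus speedup over token-by-token generation.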
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM inference overhead via latent-space decoding
Matching candidate items with internal thought representations
Eliminating autoregressive process to cut computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent-space decoding bypasses language-space autoregressive process
Matching candidate items with internal thought representations
Preserves performance while reducing computational costs significantly
🔎 Similar Papers
2024-08-08 · International Workshop on Semantic and Social Media Adaptation and Personalization · Citations: 13
Authors
Chengbing Wang (University of Science and Technology of China)
Yang Zhang (National University of Singapore)
Zhicheng Wang (University of Science and Technology of China)
Tianhao Shi (University of Science and Technology of China)
Keqin Bao (University of Science and Technology of China)
Fuli Feng (National University of Singapore)
Tat-Seng Chua (National University of Singapore)

Topics: Large Language Models · Recommender Systems