CURP: Codebook-based Continuous User Representation for Personalized Generation with LLMs

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing personalized generation methods struggle to balance output quality against computational and data efficiency. This work proposes a lightweight framework built on a discrete prototype codebook: a bidirectional user encoder extracts multidimensional user features, from which a plug-and-play continuous user representation is constructed. By introducing only about 0.2% additional trainable parameters, the approach enables efficient, interpretable, and scalable personalized generation. Compatible with both LLM fine-tuning and prompt engineering, the method significantly outperforms strong baselines across multiple generation tasks while demonstrating superior generalization and practical utility.

📝 Abstract
User modeling characterizes individuals through their preferences and behavioral patterns to enable personalized simulation and generation with Large Language Models (LLMs). However, existing methods, whether prompt-based or training-based, face challenges in balancing personalization quality against computational and data efficiency. We propose a novel framework, CURP, which employs a bidirectional user encoder and a discrete prototype codebook to extract multi-dimensional user traits. This design enables plug-and-play personalization with a small number of trainable parameters (about 20M, roughly 0.2% of the total model size). Through extensive experiments on various generation tasks, we show that CURP achieves superior performance and generalization compared to strong baselines, while offering better interpretability and scalability. The code is available at https://github.com/RaidonWong/CURP_code
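The abstract only outlines the architecture, so here is a toy numpy sketch of the general codebook idea it describes: an encoder output attends over a small set of prototype vectors to form a continuous, plug-and-play user embedding. This is an illustration under assumptions, not the authors' implementation; the mean-pooling "encoder", the attention-style mixing, and all names and sizes (`K`, `d`, `encode_user`, `user_representation`) are hypothetical stand-ins.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical sizes: K prototypes in the codebook, d-dimensional feature space.
K, d = 8, 16
codebook = rng.normal(size=(K, d))  # discrete prototype codebook (trainable in the paper)

def encode_user(behavior_feats: np.ndarray) -> np.ndarray:
    """Stand-in for the bidirectional user encoder: mean-pool behavior features."""
    return behavior_feats.mean(axis=0)

def user_representation(behavior_feats: np.ndarray) -> np.ndarray:
    """Continuous user representation: attention-weighted mix of codebook prototypes."""
    q = encode_user(behavior_feats)   # (d,) pooled user query
    weights = softmax(codebook @ q)   # similarity of this user to each prototype
    return weights @ codebook         # (d,) soft embedding, prependable to an LLM input

feats = rng.normal(size=(5, d))       # five behavior records for one user
rep = user_representation(feats)
print(rep.shape)                      # (16,)
```

Because the prototypes are shared across users and only the small codebook and encoder would be trained, this kind of design is what makes the "~0.2% additional trainable parameters" claim plausible.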
Problem

Research questions and friction points this paper is trying to address.

user modeling
personalized generation
computational efficiency
data efficiency
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Codebook-based representation
Continuous user modeling
Plug-and-play personalization
Efficient LLM personalization
Discrete prototype learning
Liang Wang
School of Data Science, Fudan University
Xinyi Mou
Fudan University
NLP · Large Language Models · Social Simulation
Xiaoyou Liu
School of Data Science, Fudan University
Xuanjing Huang
School of Computer Science, Fudan University
Zhongyu Wei
School of Data Science, Fudan University, Shanghai Innovation Institute