🤖 AI Summary
Existing personalized generation methods struggle to balance output quality with computational and data efficiency. This work proposes a lightweight framework built on a discrete prototype codebook: a bidirectional user encoder extracts multi-dimensional user features, from which a plug-and-play continuous user representation is constructed. By introducing only about 0.2% additional trainable parameters, the approach enables efficient, interpretable, and scalable personalized generation. Compatible with both LLM fine-tuning and prompt engineering, the method significantly outperforms strong baselines across multiple generation tasks while demonstrating superior generalization and practical utility.
📝 Abstract
User modeling characterizes individuals through their preferences and behavioral patterns, enabling personalized simulation and generation with Large Language Models (LLMs). However, existing methods, whether prompt-based or training-based, struggle to balance personalization quality against computational and data efficiency. We propose CURP, a novel framework that employs a bidirectional user encoder and a discrete prototype codebook to extract multi-dimensional user traits. This design enables plug-and-play personalization with a small number of trainable parameters (about 20M, roughly 0.2% of the total model size). Through extensive experiments on various generation tasks, we show that CURP achieves superior performance and generalization compared to strong baselines, while offering better interpretability and scalability. The code is available at https://github.com/RaidonWong/CURP_code
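To make the encoder-plus-codebook idea concrete, here is a minimal sketch of the general pattern the abstract describes: pool a user's behavior embeddings into a query, soft-attend over a small discrete prototype codebook, and return the resulting continuous user vector. All names, dimensions, and the mean-pooling encoder are illustrative assumptions, not CURP's actual implementation (which the paper reports adds only ~20M trainable parameters, about 0.2% of the model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration only (not the paper's configuration).
d_model = 64        # hidden size of the (frozen) base LLM
n_prototypes = 16   # number of discrete prototypes in the codebook

# The codebook would be one of the few trainable components.
codebook = rng.normal(size=(n_prototypes, d_model))

def encode_user(behavior_embeddings: np.ndarray) -> np.ndarray:
    """Stand-in for the bidirectional user encoder: pool behavior embeddings."""
    return behavior_embeddings.mean(axis=0)

def user_representation(behavior_embeddings: np.ndarray) -> np.ndarray:
    """Soft-attend over the codebook to build a continuous user vector."""
    q = encode_user(behavior_embeddings)      # (d_model,)
    logits = codebook @ q                     # similarity to each prototype
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                  # softmax over prototypes
    return weights @ codebook                 # convex combination of prototypes

# Example: a user with 5 behavior records, each embedded into d_model dims.
behaviors = rng.normal(size=(5, d_model))
u = user_representation(behaviors)
```

The resulting vector `u` could then be injected into the LLM (e.g. as a soft prompt) without touching the base model's weights, which is what makes such a representation plug-and-play.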