A Survey of Personalized Large Language Models: Progress and Future Directions

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) excel at general-knowledge tasks but struggle to capture user-specific characteristics—such as affective tendencies, writing style, and preferences. This paper presents a systematic survey of personalized large language models (PLLMs), proposing the first three-dimensional taxonomy spanning the input layer (e.g., context-aware prompt customization), model layer (e.g., parameter-efficient fine-tuning methods like LoRA and Adapter), and objective layer (e.g., human preference alignment techniques such as RLHF and DPO). Synthesizing over one hundred seminal works, we identify critical bottlenecks—including privacy-utility trade-offs and long-term memory modeling—and delineate six key frontiers: scalable personalization, safety-aligned adaptation, cross-domain transfer, continual learning, efficient inference, and interpretable personalization. Our framework provides a comprehensive foundation for both theoretical advancement and practical deployment of PLLMs.
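To make the model-layer idea concrete, here is a minimal sketch of LoRA-style parameter-efficient personalization mentioned in the summary: a frozen shared weight matrix plus a small per-user low-rank correction. This is an illustrative toy in NumPy, not the survey's or any library's implementation; all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 8, 2

# Frozen pretrained weight, shared across all users.
W = rng.normal(size=(d_out, d_in))

# Per-user low-rank adapter: only A and B are trained per user, so the
# trainable parameter count is rank*(d_in+d_out) instead of d_in*d_out.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero-init so the adapter starts as a no-op

def forward(x, scale=1.0):
    # Base projection plus the user-specific low-rank correction B @ A.
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B at zero, the personalized output equals the base model's output,
# so personalization starts from the pretrained behavior.
assert np.allclose(forward(x), W @ x)
```

Storing one small (A, B) pair per user rather than a full fine-tuned model is what makes this class of methods attractive for personalization at scale.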

📝 Abstract
Large Language Models (LLMs) excel in handling general knowledge tasks, yet they struggle with user-specific personalization, such as understanding individual emotions, writing styles, and preferences. Personalized Large Language Models (PLLMs) tackle these challenges by leveraging individual user data, such as user profiles, historical dialogues, content, and interactions, to deliver responses that are contextually relevant and tailored to each user's specific needs. This is a highly valuable research topic, as PLLMs can significantly enhance user satisfaction and have broad applications in conversational agents, recommendation systems, emotion recognition, medical assistants, and more. This survey reviews recent advancements in PLLMs from three technical perspectives: prompting for personalized context (input level), finetuning for personalized adapters (model level), and alignment for personalized preferences (objective level). To provide deeper insights, we also discuss current limitations and outline several promising directions for future research. Updated information about this survey can be found at https://github.com/JiahongLiu21/Awesome-Personalized-Large-Language-Models.
Problem

Research questions and friction points this paper is trying to address.

Addressing user-specific personalization in LLMs
Enhancing LLMs with individual user data
Exploring technical advancements in PLLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompting for personalized context (input level)
Fine-tuning personalized adapters (model level)
Alignment with personalized preferences (objective level)