🤖 AI Summary
This paper addresses the longstanding fragmentation between "personalized text generation" and "personalized downstream applications" (e.g., recommender systems) in large language model (LLM) personalization research. To bridge this gap, the authors propose the first unified, multidimensional taxonomy and formal framework for personalized LLMs. The framework systematically integrates diverse techniques, including parameter-efficient fine-tuning, prompt engineering, memory augmentation, user modeling, and preference alignment, across five dimensions: granularity of personalization, technical methodology, data paradigm, evaluation criteria, and application scenarios. The paper comprehensively surveys existing benchmarks, metrics, and open challenges; formally defines the usage patterns and desiderata of personalized LLMs; and constructs a full-stack, structured knowledge map. This work unifies disparate strands of research, resolves conceptual ambiguities, and establishes a rigorous theoretical foundation and practical roadmap for future investigation and deployment.
📝 Abstract
Personalization of Large Language Models (LLMs) has recently become increasingly important, with a wide range of applications. Despite this importance and recent progress, most existing works on personalized LLMs have focused entirely on either (a) personalized text generation or (b) leveraging LLMs for personalization-related downstream applications, such as recommendation systems. In this work, we bridge the gap between these two main directions for the first time by introducing a taxonomy for personalized LLM usage and summarizing the key differences and challenges. We provide a formalization of the foundations of personalized LLMs that consolidates and expands notions of personalization of LLMs, defining and discussing novel facets of personalization, usage, and desiderata of personalized LLMs. We then unify the literature across these diverse fields and usage scenarios by proposing systematic taxonomies for the granularity of personalization, personalization techniques, datasets, evaluation methods, and applications of personalized LLMs. Finally, we highlight challenges and important open problems that remain to be addressed. By unifying and surveying recent research using the proposed taxonomies, we aim to provide a clear guide to the existing literature and the different facets of personalization in LLMs, empowering both researchers and practitioners.