A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of aligning large language models (LLMs) with personalized and diverse user preferences. We systematically survey three technical paradigms—training-time alignment, inference-time adaptation, and user modeling—and distinguish "personalization" (individual-level customization) from "diversity" (group-level heterogeneity). First, we propose a unified taxonomy and structured technical map covering both dimensions, synthesizing over 12 mainstream methods and 7 dedicated benchmarks. Second, we introduce a comprehensive evaluation framework that identifies key open challenges: cross-user transferability, fairness across demographic and behavioral subgroups, and systematic biases in current evaluation protocols. Our analysis covers supervised fine-tuning, RLHF, prompt engineering, and implicit behavioral modeling through rigorous literature review, method categorization, and critical comparative analysis. This work delivers a systematic synthesis of LLM personalization and diversity alignment, offering foundational insights and concrete directions for future research.

📝 Abstract
Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the areas of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduce a taxonomy of preference alignment techniques covering training-time, inference-time, and user-modeling-based methods. We analyze and discuss the strengths and limitations of each group of techniques, then cover evaluation, benchmarks, and open problems in the field.
Problem

Research questions and friction points this paper is trying to address.

Surveying personalized alignment techniques for LLMs
Analyzing training-time and inference-time preference methods
Evaluating benchmarks and open problems in LLM personalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-time preference alignment techniques
Inference-time preference alignment techniques
User-modeling-based preference alignment methods