PersonalLLM: Tailoring LLMs to Individual Preferences

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 1
🤖 AI Summary
Existing LLM alignment benchmarks assume homogeneous user preferences, failing to capture inter-individual heterogeneity and fine-grained preference variation. This work introduces PersonalLLM, the first publicly available benchmark explicitly designed for individualized preference modeling under sparse feedback. Methodologically, the authors propose a novel heterogeneous preference simulation mechanism grounded in pre-trained reward models, overcoming the homogenization bias inherent in persona-based prompting. They further develop a scalable personalization framework integrating multi-reward-model distillation, meta-learning, and in-context learning. Contributions include: (1) a high-quality, open-source dataset; (2) substantially improved few-shot user preference modeling performance; and (3) the first benchmark and toolchain supporting continual personalized adaptation.

๐Ÿ“ Abstract
As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. Instead of persona-prompting LLMs based on high-level attributes (e.g., user's race or response length), which yields homogeneous preferences relative to humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity (little relevant feedback from the particular user) by leveraging historical data from other (similar) users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and highlight the need for future methodological development. Our dataset is available at https://huggingface.co/datasets/namkoong-lab/PersonalLLM
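The user-simulation idea described in the abstract can be sketched as follows: each simulated user is a random convex combination of pre-trained reward models, and the user's "chosen" answer to a prompt is the one maximizing the weighted reward. The reward functions below are toy stand-ins for real pre-trained reward models, and the weighting scheme is an illustrative assumption rather than the paper's exact construction.

```python
import random

# Toy stand-in reward functions; in the actual benchmark these would be
# scores produced by pre-trained reward models.
def reward_length(prompt, response):
    return len(response)           # prefers longer answers

def reward_brevity(prompt, response):
    return -len(response)          # prefers shorter answers

def reward_politeness(prompt, response):
    return response.lower().count("please")

REWARD_MODELS = [reward_length, reward_brevity, reward_politeness]

def sample_user(rng, n_models=len(REWARD_MODELS)):
    """A simulated user = random convex weights over the reward models."""
    raw = [rng.random() for _ in range(n_models)]
    total = sum(raw)
    return [w / total for w in raw]

def user_utility(weights, prompt, response):
    """Weighted sum of reward-model scores defines the user's latent preference."""
    return sum(w * rm(prompt, response) for w, rm in zip(weights, REWARD_MODELS))

def preferred_response(weights, prompt, responses):
    """The simulated user picks the highest-utility response."""
    return max(responses, key=lambda r: user_utility(weights, prompt, r))

rng = random.Random(0)
user = sample_user(rng)
responses = ["Short answer.", "A much longer, detailed answer, please."]
choice = preferred_response(user, "Explain transformers.", responses)
```

Because distinct weight vectors induce distinct rankings over the same candidate answers, sampling many such users yields the heterogeneous preference data the benchmark is built on.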
Problem

Research questions and friction points this paper is trying to address.

Personalizing LLMs to individual preferences
Developing methods for diverse user preferences
Addressing data sparsity in personalization algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

PersonalLLM benchmark for user-specific adaptation
Diverse user preferences from reward models
Leveraging historical data for personalization algorithms
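As a rough illustration of the in-context learning baseline the paper explores, a user's past preference feedback (chosen vs. rejected answers) can be serialized into the prompt so the LLM conditions on that history when answering a new query. The record format and template below are illustrative assumptions, not the paper's exact setup.

```python
def build_personalized_prompt(history, new_prompt):
    """Serialize past (prompt, chosen, rejected) feedback into a few-shot
    context, then append the new query."""
    lines = []
    for ex in history:
        lines.append(f"User asked: {ex['prompt']}")
        lines.append(f"Preferred answer: {ex['chosen']}")
        lines.append(f"Rejected answer: {ex['rejected']}")
        lines.append("")
    lines.append("Given the preferences shown above, answer the new question "
                 "in the style this user prefers.")
    lines.append(f"User asks: {new_prompt}")
    return "\n".join(lines)

# Hypothetical single-example history for one user.
history = [{"prompt": "What is an LLM?",
            "chosen": "A short, direct definition.",
            "rejected": "A long-winded essay."}]
prompt = build_personalized_prompt(history, "What is RLHF?")
```

The resulting string would be passed to any chat model as-is; with only a handful of feedback examples per user, this is exactly the sparse-feedback regime the benchmark targets.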
Authors
Thomas P. Zollo, Columbia University
A. Siah, Columbia University
Naimeng Ye, Ph.D. Student, Columbia University (Machine Learning)
Ang Li, Columbia University
Hongseok Namkoong, Columbia University (AI, Sequential Decision-making)