Many Preferences, Few Policies: Towards Scalable Language Model Personalization

πŸ“… 2026-04-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of deploying personalized large language models (LLMs) at scale, which is hindered by prohibitive computational and memory costs. To overcome this, the authors propose PALM (Portfolio of Aligned LLMs), a method that constructs a compact yet representative portfolio of LLMs and models each user’s multidimensional preferences as a weight vector over preference dimensions (e.g., safety, humor, brevity). Personalization is achieved by rapidly selecting a near-optimal model from the portfolio via a scalarized multi-objective reward function. The study provides the first theoretical guarantees on both the size of the LLM portfolio and the approximation quality of the personalized model, explicitly characterizing the trade-off between system cost and personalization performance. Experimental results show that PALM significantly enhances output diversity while keeping the portfolio small, outperforming existing baselines.
πŸ“ Abstract
The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. It characterizes the trade-off between system cost and personalization, as well as the diversity of LLMs required to cover the landscape of user preferences. We provide empirical results that validate these guarantees and demonstrate greater output diversity over common baselines.
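The selection step the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: the reward values, dimension names, and function names below are hypothetical, and the sketch only shows the core idea of scalarizing per-dimension rewards with a user's weight vector and picking the best portfolio member.

```python
def scalarized_reward(reward_vec, weights):
    """Weighted sum of per-dimension rewards: w . r."""
    return sum(w * r for w, r in zip(weights, reward_vec))

def select_model(portfolio_rewards, user_weights):
    """Index of the portfolio model maximizing the scalarized reward."""
    return max(range(len(portfolio_rewards)),
               key=lambda i: scalarized_reward(portfolio_rewards[i], user_weights))

# Hypothetical example: 3 portfolio models scored on [safety, humor, brevity].
rewards = [
    [0.9, 0.2, 0.5],  # safety-focused model
    [0.3, 0.9, 0.4],  # humor-focused model
    [0.5, 0.4, 0.9],  # brevity-focused model
]
user = [0.6, 0.1, 0.3]  # this user mostly values safety
print(select_model(rewards, user))  # -> 0 (the safety-focused model)
```

Under the paper's guarantee, a portfolio built by PALM contains a near-optimal model for any such weight vector, so this argmax over a small set replaces maintaining a separate LLM per user.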
Problem

Research questions and friction points this paper is trying to address.

LLM personalization
user preferences
scalable AI
model portfolio
preference alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM personalization
model portfolio
multi-objective alignment
theoretical guarantees
preference modeling