Personalize Your LLM: Fake it then Align it

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two bottlenecks in large language model (LLM) personalization, the high computational cost of per-user fine-tuning and the dependence of retrieval-based methods on large, high-quality annotated data, this paper proposes CHAMELEON, a lightweight, annotation-free "fake it then align it" approach. It first synthesizes personal preference data by having the LLM generate its own preference samples, then injects user-specific preferences into the model through low-rank representation editing in the latent space, eliminating the need for per-user fine-tuning or human annotation. The method adapts quickly and transfers across model architectures (e.g., the Llama and Phi families). Evaluated on multiple tasks, including those from the LaMP personalization benchmark, it improves instruction-tuned models and outperforms two representative personalization baselines by an average of 40%.

📝 Abstract
Personalizing large language models (LLMs) is essential for delivering tailored interactions that improve user experience. Many existing personalization methods require fine-tuning LLMs for each user, rendering them prohibitively expensive for widespread adoption. Although retrieval-based approaches offer a more compute-efficient alternative, they still depend on large, high-quality datasets that are not consistently available for all users. To address this challenge, we propose CHAMELEON, a scalable and efficient personalization approach that uses (1) self-generated personal preference data and (2) representation editing to enable quick and cost-effective personalization. Our experiments on various tasks, including those from the LaMP personalization benchmark, show that CHAMELEON efficiently adapts models to personal preferences, improving instruction-tuned models and outperforming two personalization baselines by an average of 40% across two model architectures.
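The first component, self-generated personal preference data, amounts to having the LLM produce its own preference pairs: a response conditioned on the user's profile versus a generic one. A minimal sketch of constructing such prompt pairs is below; the function names and templates are hypothetical illustrations, not taken from the paper, and the model call itself is left out.

```python
# Hypothetical sketch of synthesizing a (preferred, dispreferred) prompt pair
# from a user profile, in the spirit of CHAMELEON's self-generated preference
# data. A real pipeline would feed both prompts to the LLM and keep the
# resulting completions as the preference pair.

def make_preference_prompts(profile: str, task: str) -> dict:
    """Return prompts whose completions form a synthetic preference pair."""
    return {
        # Conditioned on the user's profile -> yields the "preferred" sample.
        "preferred": (
            f"User profile: {profile}\n"
            f"Task: {task}\n"
            "Respond in the user's preferred style."
        ),
        # No profile information -> yields the generic "dispreferred" sample.
        "dispreferred": f"Task: {task}\nRespond generically.",
    }

pairs = [
    make_preference_prompts(
        "prefers concise, formal news headlines",
        "Write a headline for an article about solar power.",
    )
]
```

Because the pairs are generated by the model itself, no human annotation is needed, which is the annotation-free property the abstract emphasizes.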
Problem

Research questions and friction points this paper is trying to address.

Personalizing LLMs for tailored user interactions
Reducing cost and computational demands of personalization
Overcoming dependency on large, high-quality datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-generated personal preference data, removing the need for human annotation
Representation editing for quick, low-cost personalization without per-user fine-tuning
Efficient adaptation to personal preferences across model architectures