🤖 AI Summary
To address the limited personalization capability of large language models (LLMs) and their difficulty in adapting to niche user preferences, this paper proposes "preference abduction": a paradigm that extends conventional binary preference learning ("which response is better?") with abductive reasoning ("why, and for whom, is it better?"). Methodologically, the authors apply abductive inference to reverse-engineer user personas from preference pairs, including the rejected responses, and combine persona inference, persona-guided preference data augmentation, and persona-tailored instruction tuning for fine-grained response adaptation. Experiments show substantial gains in response accuracy under customized personas, and the approach outperforms mainstream alignment methods on long-tail, niche-preference benchmarks. These results support the effectiveness, generalizability, and interpretability of persona-enhanced alignment.
📝 Abstract
LLMs are tuned to follow instructions (aligned) by learning which of two outputs users prefer for a prompt. However, this preference data format does not convey why users prefer the chosen or rejected responses, so LLMs trained on these datasets cannot tailor responses to varied user needs. To surface these parameters of personalization, we apply abductive reasoning to preference data, inferring the needs and interests of users, i.e., personas, that may prefer each output. We test this idea in two steps: Persona Inference (PI), abductively inferring personas of users who prefer chosen or rejected outputs, and Persona Tailoring (PT), training models to tailor responses to personas from PI. We find: 1) LLMs infer personas that accurately explain why different users may prefer either chosen or rejected outputs; 2) training on preference data augmented with PI personas via PT boosts personalization, enabling models to support user-written personas; and 3) rejected-response personas form harder personalization evaluations, showing that PT better aids users with uncommon preferences than typical alignment methods do. We argue for an abductive view of preferences for personalization, asking not only which response is better but when, why, and for whom.
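To make the two-step pipeline concrete, here is a minimal sketch of how PI output could feed PT training data. This is an illustration, not the paper's implementation: the function names and record fields are assumptions, and `infer_persona` is a stub standing in for an actual LLM call. The key idea it shows is that each binary preference pair yields two persona-conditioned examples, so the rejected response becomes a positive target for the persona that would prefer it.

```python
# Hypothetical sketch of the PI -> PT data flow. All names here are
# assumptions for illustration; the paper's real system would prompt
# an LLM for Persona Inference instead of the stub below.

def infer_persona(prompt, response):
    """Stand-in for Persona Inference (PI): abductively infer a persona
    that would prefer `response` for `prompt`. A real system would query
    an LLM here; we return a template string for illustration."""
    return f"A user who, for '{prompt}', wants a response like: {response[:40]}"

def augment_pair(prompt, chosen, rejected):
    """Persona-guided augmentation: one binary preference pair becomes
    two persona-conditioned training examples. Note that the rejected
    response is a *positive* target for its own inferred persona."""
    return [
        {"persona": infer_persona(prompt, chosen),   "prompt": prompt, "target": chosen},
        {"persona": infer_persona(prompt, rejected), "prompt": prompt, "target": rejected},
    ]

examples = augment_pair(
    prompt="Explain quantum computing.",
    chosen="A rigorous overview with the underlying math spelled out...",
    rejected="A short analogy-based explanation with no equations...",
)
# Persona Tailoring (PT) would then fine-tune on (persona, prompt) -> target.
for ex in examples:
    print(ex["persona"])
```

Under this framing, the rejected-response examples are exactly what makes the harder personalization evaluations in finding 3 possible: they represent users whose preferences diverge from the majority signal that standard alignment optimizes for.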