🤖 AI Summary
This work addresses the limitations of open-source large language models in generating personalized dialogue responses that simultaneously exhibit contextual coherence, role consistency, and strict adherence to instructions. To overcome this challenge, the authors propose PersoDPO, a novel framework that integrates automatic evaluation signals from multiple closed- and open-source large language models to construct high-quality preference pairs without human annotation. Building upon an enhanced Direct Preference Optimization (DPO) approach, PersoDPO incorporates multidimensional evaluation criteria, including personalization, contextual coherence, and format compliance, to enable efficient, scalable, and annotation-free preference learning. Experimental results on the FoCus dataset demonstrate that open-source models fine-tuned with PersoDPO significantly outperform strong baselines and standard DPO across multiple key dimensions.
📝 Abstract
Personalization and contextual coherence are two essential components in building effective persona-grounded dialogue systems. These aspects play a crucial role in enhancing user engagement and ensuring that responses are relevant and consistent with the user's identity. However, recent studies indicate that open-source large language models (LLMs) continue to struggle to generate responses that are both contextually grounded and aligned with persona cues, despite exhibiting strong general conversational abilities such as fluency and naturalness. We present PersoDPO, a scalable preference optimization framework that uses supervision signals from automatic evaluations of responses generated by both closed-source and open-source LLMs to fine-tune dialogue models. The framework integrates evaluation metrics targeting coherence and personalization, along with a length-format compliance feature to promote instruction adherence. These signals are combined to automatically construct high-quality preference pairs without manual annotation, enabling a scalable and reproducible training pipeline. Experiments on the FoCus dataset show that an open-source language model fine-tuned with the PersoDPO framework consistently outperforms strong open-source baselines and a standard Direct Preference Optimization (DPO) variant across multiple evaluation dimensions.
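The pipeline described above, scoring candidate responses along several automatic dimensions and turning the scores into chosen/rejected pairs for DPO, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimension names, the weighted-sum aggregation, and the `margin` filter are all assumptions made for clarity.

```python
# Hypothetical sketch of annotation-free preference-pair construction.
# Score dimensions, weights, and margin filtering are illustrative
# assumptions, not PersoDPO's exact formulation.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class ScoredResponse:
    text: str
    coherence: float        # contextual-coherence score in [0, 1]
    personalization: float  # persona-alignment score in [0, 1]
    format_ok: float        # length/format compliance in [0, 1]

def aggregate(r: ScoredResponse, w=(0.4, 0.4, 0.2)) -> float:
    """Combine per-dimension automatic scores into one scalar."""
    return w[0] * r.coherence + w[1] * r.personalization + w[2] * r.format_ok

def build_preference_pairs(responses, margin=0.1):
    """Return (chosen, rejected) text pairs whose aggregate scores
    differ by at least `margin`, keeping only confident preferences."""
    pairs = []
    for a, b in combinations(responses, 2):
        sa, sb = aggregate(a), aggregate(b)
        if abs(sa - sb) >= margin:
            chosen, rejected = (a, b) if sa > sb else (b, a)
            pairs.append((chosen.text, rejected.text))
    return pairs
```

The resulting pairs would then feed a standard DPO training loop in place of human-labeled preferences; the margin threshold is one plausible way to discard near-tied candidates whose ordering the automatic judges cannot support.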