🤖 AI Summary
User satisfaction in dialogue systems is inherently subjective; existing “one-size-fits-all” alignment methods overlook minority-group preferences, leading to biased satisfaction predictions. To address this, we propose a fairness-aware modeling framework that jointly accounts for individual and group-level heterogeneity: (1) CoPeR—a chain-of-preference reasoning module—explicitly models fine-grained individual preferences; (2) M2PC—an unsupervised clustering method—discovers latent user group structures without labeled group annotations; and (3) PAda-PPO—a policy-optimization algorithm—jointly maximizes both individual and group-level satisfaction objectives via reinforcement learning. Experiments on an emotional support dialogue dataset demonstrate that our approach significantly improves overall satisfaction estimation accuracy (+4.2% reduction in MAE) and yields substantial gains for underrepresented groups (18.7% lower prediction error), establishing a new paradigm for personalized and equitable dialogue evaluation.
📝 Abstract
User satisfaction in dialogue systems is inherently subjective. When the same response strategy is applied across users, minority users may assign different satisfaction ratings than majority users due to variations in individual intents and preferences. However, existing alignment methods typically train one-size-fits-all models that aim for broad consensus, often overlooking minority perspectives and user-specific adaptation. We propose a unified framework that models both individual- and group-level preferences for user satisfaction estimation. First, we introduce Chain-of-Personalized-Reasoning (CoPeR) to capture individual preferences through interpretable reasoning chains. Second, we propose an expectation-maximization-based Majority-Minority Preference-Aware Clustering (M2PC) algorithm that discovers distinct user groups in an unsupervised manner to learn group-level preferences. Finally, we integrate these components into a preference-adaptive reinforcement learning framework (PAda-PPO) that jointly optimizes alignment with both individual and group preferences. Experiments on the Emotional Support Conversation dataset demonstrate consistent improvements in user satisfaction estimation, particularly for underrepresented user groups.