Minority-Aware Satisfaction Estimation in Dialogue Systems via Preference-Adaptive Reinforcement Learning

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
User satisfaction in dialogue systems is inherently subjective; existing one-size-fits-all alignment methods overlook minority-group preferences, leading to biased satisfaction predictions. To address this, we propose a fairness-aware framework that jointly models individual- and group-level heterogeneity: (1) CoPeR, a Chain-of-Personalized-Reasoning module that captures fine-grained individual preferences through interpretable reasoning chains; (2) M2PC, an expectation-maximization-based Majority-Minority Preference-Aware Clustering algorithm that discovers latent user groups without labeled group annotations; and (3) PAda-PPO, a preference-adaptive policy-optimization algorithm that jointly maximizes individual- and group-level satisfaction objectives via reinforcement learning. Experiments on an emotional support dialogue dataset show that the approach improves overall satisfaction estimation accuracy (a 4.2% reduction in MAE) and yields substantial gains for underrepresented groups (18.7% lower prediction error), supporting more personalized and equitable dialogue evaluation.

📝 Abstract
User satisfaction in dialogue systems is inherently subjective. When the same response strategy is applied across users, minority users may assign different satisfaction ratings than majority users due to variations in individual intents and preferences. However, existing alignment methods typically train one-size-fits-all models that aim for broad consensus, often overlooking minority perspectives and user-specific adaptation. We propose a unified framework that models both individual- and group-level preferences for user satisfaction estimation. First, we introduce Chain-of-Personalized-Reasoning (CoPeR) to capture individual preferences through interpretable reasoning chains. Second, we propose an expectation-maximization-based Majority-Minority Preference-Aware Clustering (M2PC) algorithm that discovers distinct user groups in an unsupervised manner to learn group-level preferences. Finally, we integrate these components into a preference-adaptive reinforcement learning framework (PAda-PPO) that jointly optimizes alignment with both individual and group preferences. Experiments on the Emotional Support Conversation dataset demonstrate consistent improvements in user satisfaction estimation, particularly for underrepresented user groups.
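The joint individual/group objective behind PAda-PPO can be illustrated with a minimal reward-blending sketch. The function name and the fixed mixing weight `alpha` are assumptions for illustration, not the paper's actual formulation:

```python
# Hypothetical sketch of a preference-adaptive reward (PAda-PPO-style):
# blend an individual-preference satisfaction score with a group-level
# score before passing it to a PPO-style optimizer. The name
# `combined_reward` and the linear weighting `alpha` are assumptions.

def combined_reward(individual_score: float, group_score: float,
                    alpha: float = 0.5) -> float:
    """Blend individual- and group-level satisfaction rewards.

    alpha = 1.0 optimizes purely for the individual user;
    alpha = 0.0 optimizes purely for the user's group.
    """
    return alpha * individual_score + (1.0 - alpha) * group_score
```

In a full system, `alpha` would plausibly be adapted per user or per group (e.g., upweighting group-level signal for users with sparse interaction history), which is what makes the optimization "preference-adaptive".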
Problem

Research questions and friction points this paper is trying to address.

Estimating minority user satisfaction in dialogue systems via preference modeling
Addressing limitations of one-size-fits-all alignment methods for diverse users
Jointly optimizing individual and group preferences through adaptive reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models individual and group preferences in a unified framework
Uses Chain-of-Personalized-Reasoning (CoPeR) to capture individual preferences
Integrates preference-adaptive reinforcement learning (PAda-PPO) for joint optimization
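The unsupervised group discovery that M2PC performs can be sketched with a generic expectation-maximization loop over user preference scores. This is a toy 1-D, two-cluster illustration with Gaussian-style responsibilities and a deterministic min/max initialization; all of these details are assumptions, not the paper's algorithm:

```python
import math

# Toy EM sketch of unsupervised user-group discovery (M2PC-style).
# Each user is reduced to a single preference score; we fit two
# cluster means with soft assignments. Illustrative assumptions only.

def em_cluster(scores, iters=20):
    # Deterministic two-cluster initialization from the data range.
    means = [min(scores), max(scores)]
    for _ in range(iters):
        # E-step: soft responsibility of each cluster for each score.
        resp = []
        for x in scores:
            w = [math.exp(-(x - m) ** 2) for m in means]
            z = sum(w)
            resp.append([wi / z for wi in w])
        # M-step: re-estimate each mean as a responsibility-weighted average.
        for j in range(len(means)):
            num = sum(r[j] * x for r, x in zip(resp, scores))
            den = sum(r[j] for r in resp)
            means[j] = num / den
    return sorted(means)

# Four "majority" users near 1.0 and two "minority" users near 5.0:
scores = [1.0, 1.2, 0.9, 1.1, 5.0, 4.8]
print(em_cluster(scores))  # means near 1.05 and 4.9
```

The point of the sketch is that group structure (here, a majority and a minority cluster) emerges without any labeled group annotations, which is the property the abstract claims for M2PC.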