🤖 AI Summary
To address two key bottlenecks in reward-driven proactive dialogue—weakly supervised label noise induced by ASR errors and sparse user feedback in long-tail domains—this paper proposes a novel self-supervised framework. Methodologically, it introduces a dual auxiliary task combining contrastive learning with joint domain-intent classification to learn robust user conversational representations, coupled with a weakly supervised robust training strategy. Its core innovations are: (i) leveraging contrastive learning to disentangle semantic content from ASR artifacts, thereby improving discrimination of erroneous utterances; and (ii) incorporating domain-intent priors to mitigate long-tail bias and enhance generalization to infrequent scenarios. Evaluated on the DuerOS platform, the method achieves significant gains: +12.3% accuracy in rare error identification and +9.7% F1 score in long-tail domain satisfaction prediction. This work establishes a scalable, intrinsic reward modeling paradigm for industrial-grade proactive dialogue systems.
📝 Abstract
Reward-driven proactive dialogue agents require precise estimation of user satisfaction as an intrinsic reward signal to determine optimal interaction strategies. Specifically, this framework triggers clarification questions when detecting potential user dissatisfaction during interactions in the industrial dialogue system. Traditional works typically rely on training a neural network model based on weak labels which are generated by a simple model trained on user actions after current turn. However, existing methods suffer from two critical limitations in real-world scenarios: (1) Noisy Reward Supervision, dependence on weak labels derived from post-hoc user actions introduces bias, particularly failing to capture satisfaction signals in ASR-error-induced utterances; (2) Long-Tail Feedback Sparsity, the power-law distribution of user queries causes reward prediction accuracy to drop in low-frequency domains. The noise in the weak labels and a power-law distribution of user utterances results in that the model is hard to learn good representation of user utterances and sessions. To address these limitations, we propose two auxiliary tasks to improve the representation learning of user utterances and sessions that enhance user satisfaction prediction. The first one is a contrastive self-supervised learning task, which helps the model learn the representation of rare user utterances and identify ASR errors. The second one is a domain-intent classification task, which aids the model in learning the representation of user sessions from long-tailed domains and improving the model's performance on such domains. The proposed method is evaluated on DuerOS, demonstrating significant improvements in the accuracy of error recognition on rare user utterances and long-tailed domains.