🤖 AI Summary
To address intent drift and response inconsistency in goal-oriented proactive dialogue systems, this paper proposes a consistency reflection and correction mechanism. Our method uniquely integrates dialogue state tracking (DST) with explicit goal graph modeling to construct an LLM-based consistency discriminator that dynamically detects logical conflicts among dialogue state, user goals, and system responses. A lightweight correction decoder then performs real-time response adjustment. Evaluated on MultiWOZ and SGD benchmarks, our approach improves task success rate by 8.2%, reduces inconsistency error rate by 37%, and incurs less than 5% additional inference latency. These results demonstrate substantial gains in task completion and user satisfaction, establishing a novel paradigm for controllable and interpretable proactive dialogue systems.
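The detect-then-correct loop described in the summary can be sketched in heavily simplified form. This is not the paper's implementation: both the LLM-based discriminator and the correction decoder are replaced here with hypothetical rule-based stand-ins operating on flat slot-value dictionaries, purely to illustrate the data flow (state + goals → conflict detection → response correction).

```python
from dataclasses import dataclass


@dataclass
class DialogueState:
    """Tracked dialogue state (DST output), e.g. {"area": "centre"}."""
    slots: dict


@dataclass
class GoalGraph:
    """Hypothetical flattened goal graph: slot -> value the user requires."""
    goals: dict


def detect_conflicts(state: DialogueState, goal_graph: GoalGraph,
                     response_slots: dict) -> dict:
    """Stand-in for the LLM-based consistency discriminator.

    Flags every response slot that contradicts either the user's goals
    or the tracked dialogue state, returning slot -> expected value.
    """
    conflicts = {}
    for slot, value in response_slots.items():
        # Goals take precedence over tracked state when both constrain a slot.
        expected = goal_graph.goals.get(slot, state.slots.get(slot))
        if expected is not None and expected != value:
            conflicts[slot] = expected
    return conflicts


def correct_response(response_slots: dict, conflicts: dict) -> dict:
    """Stand-in for the lightweight correction decoder.

    Overwrites conflicting slots with their expected values,
    leaving consistent slots untouched.
    """
    return {**response_slots, **conflicts}


if __name__ == "__main__":
    state = DialogueState(slots={"area": "centre"})
    goals = GoalGraph(goals={"cuisine": "italian"})
    # A drafted system response that drifted from the user's stated goal:
    draft = {"cuisine": "chinese", "area": "centre"}

    conflicts = detect_conflicts(state, goals, draft)
    fixed = correct_response(draft, conflicts)
    print(conflicts)  # {'cuisine': 'italian'}
    print(fixed)      # {'cuisine': 'italian', 'area': 'centre'}
```

In the actual system, `detect_conflicts` would be a prompted LLM judging state/goal/response triples and `correct_response` a small decoder that regenerates only the offending response span, which is what keeps the added inference latency low.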
📝 Abstract
This paper proposes a consistency reflection and correction method for goal-oriented dialogue systems.