🤖 AI Summary
This work addresses the challenges posed by colloquial expressions and incomplete utterances in multi-turn, multi-party dialogues, which often hinder contextual understanding and distort structural representation, thereby degrading response generation quality. To mitigate these issues, the authors propose a context rewriting framework that integrates discourse coherence modeling with response quality guidance, complemented by a dynamic self-evolving learning mechanism. This mechanism constructs preference data grounded in both coherence and response quality to enable synergistic, iterative optimization between the rewriter and the generator. Experimental results across four multi-party dialogue datasets demonstrate that the proposed approach significantly enhances the contextual consistency and overall coherence of generated responses.
📝 Abstract
Previous research on multi-party dialogue generation has predominantly leveraged structural information inherent in dialogues to directly inform the generation process. However, the prevalence of colloquial expressions and incomplete utterances often impedes comprehension and weakens the fidelity of dialogue structure representations, a problem that is particularly pronounced in multi-party dialogues. In this work, we propose DRCR (Discourse coherence and Response-guided Context Rewriting), a novel framework that improves multi-party dialogue generation through dialogue context rewriting. Specifically, DRCR employs two complementary feedback signals, discourse coherence and response quality, to construct preference data for both context rewriting and response generation. Moreover, we propose a dynamic self-evolution learning method that allows the rewriter and responder to continuously enhance their capabilities through mutual interaction in an iterative training loop. Comprehensive experiments conducted on four multi-party dialogue datasets substantiate the effectiveness of DRCR.
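The abstract's self-evolution loop can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' released implementation: the function names (`rewrite`, `respond`, `coherence_score`, `response_quality`) and the toy scoring are assumptions standing in for the paper's rewriter, responder, and the two feedback signals; the actual preference-optimization update (e.g. a DPO-style step) is elided.

```python
import random

# Hypothetical stand-ins for DRCR's components; all names and scoring
# functions below are illustrative, not from the paper's codebase.

def rewrite(context, seed):
    """Toy rewriter: produces one candidate rewritten context."""
    return context + f" [rewrite-{seed}]"

def respond(rewritten_context):
    """Toy responder: generates a response from the rewritten context."""
    return f"response to ({rewritten_context})"

def coherence_score(rewritten_context):
    """Placeholder for the discourse-coherence feedback signal."""
    return random.random()

def response_quality(response):
    """Placeholder for the response-quality feedback signal."""
    return random.random()

def build_preference_pairs(context, n_candidates=4):
    """Sample candidate rewrites, score each by the combined signal
    (coherence + response quality), and return (chosen, rejected)
    pairs for both the rewriter and the responder."""
    scored = []
    for seed in range(n_candidates):
        rw = rewrite(context, seed)
        resp = respond(rw)
        score = coherence_score(rw) + response_quality(resp)
        scored.append((score, rw, resp))
    scored.sort(key=lambda t: t[0], reverse=True)
    best, worst = scored[0], scored[-1]
    rewriter_pair = (best[1], worst[1])    # preferred vs. dispreferred rewrite
    responder_pair = (best[2], worst[2])   # preferred vs. dispreferred response
    return rewriter_pair, responder_pair

def self_evolve(dialogues, n_rounds=2):
    """Iterative loop: each round rebuilds preference data from the current
    models; in the real framework a preference-optimization step would
    update the rewriter and responder here (elided in this sketch)."""
    for _ in range(n_rounds):
        for ctx in dialogues:
            rw_pair, rs_pair = build_preference_pairs(ctx)
            # model updates would happen here
    return True
```

The key idea captured here is the mutual interaction: the rewriter's candidates are scored partly through the responder's output quality, so fresh preference data in each round reflects the current state of both models.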