Improving Dialogue Discourse Parsing through Discourse-aware Utterance Clarification

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discourse parsing in dialogues suffers from semantic ambiguity caused by linguistic phenomena such as ellipsis and idiomatic expressions, which severely hinders accurate discourse relation identification. To address this, the paper proposes a discourse-aware clarification mechanism with two components: (1) a Discourse-aware Clarification Module (DCM) that performs clarification type reasoning and discourse goal reasoning, and (2) Contribution-aware Preference Optimization (CPO), a preference-optimization framework in which the parser assesses the contribution of each clarification and feeds that signal back to the DCM, enabling co-adaptive training of the clarifier and the parser. Experiments on STAC and Molweni demonstrate substantial improvements over state-of-the-art methods, with F1 gains of 3.2–4.7 percentage points. The approach effectively mitigates error propagation stemming from semantic ambiguity in dialogue discourse parsing.

📝 Abstract
Dialogue discourse parsing aims to identify and analyze discourse relations between the utterances within dialogues. However, linguistic features in dialogues, such as omission and idiom, frequently introduce ambiguities that obscure the intended discourse relations, posing significant challenges for parsers. To address this issue, we propose a Discourse-aware Clarification Module (DCM) to enhance the performance of the dialogue discourse parser. DCM employs two distinct reasoning processes: clarification type reasoning and discourse goal reasoning. The former analyzes linguistic features, while the latter distinguishes the intended relation from the ambiguous one. Furthermore, we introduce Contribution-aware Preference Optimization (CPO) to mitigate the risk of erroneous clarifications, thereby reducing cascading errors. CPO enables the parser to assess the contributions of the clarifications from DCM and provide feedback to optimize the DCM, enhancing its adaptability and alignment with the parser's requirements. Extensive experiments on the STAC and Molweni datasets demonstrate that our approach effectively resolves ambiguities and significantly outperforms the state-of-the-art (SOTA) baselines.
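The clarify-then-parse pipeline the abstract describes can be sketched as below. All function names and the toy heuristics are hypothetical stand-ins: the paper's actual DCM and parser are learned models, and the real DCM runs two reasoning processes (clarification type reasoning over linguistic features such as ellipsis, and discourse goal reasoning over the intended relation).

```python
def clarify(utterance, context):
    """Stand-in for the Discourse-aware Clarification Module (DCM).

    A real DCM reasons about which linguistic feature (e.g. ellipsis)
    caused the ambiguity and which relation the speaker intended.
    Here we just expand one toy ellipsis from the prior utterance.
    """
    if utterance == "Me too." and context:
        return "I " + context[-1].removeprefix("I ")
    return utterance

def parse(dialogue):
    """Stand-in for the discourse parser: attach each utterance to
    its predecessor and label the relation (toy heuristic)."""
    return [
        (i - 1, i, "QAP" if dialogue[i].endswith("?") else "Continuation")
        for i in range(1, len(dialogue))
    ]

dialogue = ["I want to play Settlers.", "Me too."]
# Clarify each utterance against its left context before parsing,
# so the parser never sees the ambiguous elliptical form.
clarified = [clarify(u, dialogue[:i]) for i, u in enumerate(dialogue)]
print(clarified[1])
print(parse(clarified))
```

The point of the two-stage design is that the parser's input is disambiguated text, so relation errors caused by surface-level ellipsis or idioms are addressed before parsing rather than after.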
Problem

Research questions and friction points this paper is trying to address.

Resolve ambiguities in dialogue discourse relations caused by linguistic features
Enhance parser performance with Discourse-aware Clarification Module (DCM)
Mitigate erroneous clarifications using Contribution-aware Preference Optimization (CPO)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discourse-aware Clarification Module (DCM) resolves ambiguities via two reasoning processes: clarification type reasoning and discourse goal reasoning
Contribution-aware Preference Optimization (CPO) lets the parser score each clarification's contribution and feed that signal back to the DCM, reducing cascading errors
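The CPO idea of weighting clarification preferences by how much they actually help the parser can be sketched as a DPO-style pairwise objective scaled by a contribution score. The weighting scheme, the `beta` value, and the folded-in reference model are illustrative assumptions here, not the paper's exact formulation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cpo_loss(logp_chosen, logp_rejected, contribution, beta=0.1):
    """DPO-style pairwise preference loss, scaled by a contribution score.

    logp_chosen / logp_rejected: log-probabilities of the preferred and
    dispreferred clarification under the clarifier policy (the reference
    model term is folded in here for brevity).
    contribution: parser-derived score in [0, 1] measuring how much the
    chosen clarification improved the parser's prediction.
    """
    margin = beta * (logp_chosen - logp_rejected)
    return -contribution * math.log(sigmoid(margin))

# A clarification that clearly helped the parser keeps its full
# gradient weight; an unhelpful one is down-weighted, so the
# clarifier is steered toward the parser's actual needs.
helpful = cpo_loss(-1.0, -3.0, contribution=1.0)
unhelpful = cpo_loss(-1.0, -3.0, contribution=0.2)
print(helpful > unhelpful)
```

Scaling the loss rather than filtering pairs outright keeps every preference pair informative while letting the parser's feedback decide how strongly each one shapes the clarifier.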
Yaxin Fan
School of Computer Science and Technology, Soochow University, Suzhou, China
Peifeng Li
School of Computer Science and Technology, Soochow University, Suzhou, China
Qiaoming Zhu
Soochow University
Natural Language Processing