🤖 AI Summary
In crisis response scenarios, inconsistent response styles generated by language models undermine the trust of affected individuals, yet existing work offers no systematic solution to this problem. This paper proposes a dynamic fusion generation method: it first constructs an instance-level candidate response set, then selects the optimal output via a two-stage fusion mechanism that jointly optimizes semantic quality and stylistic consistency. We design a quantitative metric for stylistic consistency and build an evaluation-feedback-driven fusion framework around it. Experiments on multiple crisis dialogue datasets show that our approach significantly outperforms baselines in both response quality (measured by BLEU and ROUGE) and stylistic stability (assessed by the proposed metric), improving the credibility and practical utility of automated crisis responses.
📝 Abstract
Effective communication with crisis-affected populations is urgently needed, and language-model-generated responses have been proposed to assist in crisis communication. A critical yet often overlooked factor is the consistency of response style, which can affect affected individuals' trust in responders. Despite its importance, few studies have explored methods for maintaining stylistic consistency across generated responses. To address this gap, we propose a novel metric for evaluating style consistency and introduce a fusion-based generation approach grounded in this metric. Our method employs a two-stage process: it first assesses the style of candidate responses and then optimizes and integrates them at the instance level through fusion. This yields high-quality responses while significantly reducing stylistic variation between instances. Experiments across multiple datasets demonstrate that our approach consistently outperforms baselines in both response quality and stylistic uniformity.
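To make the two-stage idea concrete, here is a minimal, purely illustrative sketch. The paper's actual metric, fusion mechanism, and model are not specified here, so everything below is an assumption: style is approximated by a character-trigram profile, "fusion" is reduced to a weighted selection between a quality score and a style-consistency score, and all function names (`style_vector`, `cosine`, `select_response`, the weight `alpha`) are hypothetical.

```python
# Hypothetical sketch of a two-stage style-aware candidate selection.
# Stage 1: score each candidate's style against a running style profile.
# Stage 2: combine style consistency with a quality score and keep the best.
# All names and the trigram style proxy are illustrative assumptions,
# not the paper's actual method.
import math
from collections import Counter


def style_vector(text: str) -> Counter:
    """Toy style features: a character-trigram frequency profile."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_response(candidates, quality_scores, style_profile, alpha=0.5):
    """Pick the candidate maximizing
    alpha * quality + (1 - alpha) * style_consistency."""
    best, best_score = None, -1.0
    for cand, quality in zip(candidates, quality_scores):
        consistency = cosine(style_vector(cand), style_profile)
        score = alpha * quality + (1 - alpha) * consistency
        if score > best_score:
            best, best_score = cand, score
    return best
```

In this toy setup, a candidate that closely matches the established style profile can win even against a slightly higher-quality but off-style candidate, which is the trade-off the abstract's two-stage process is meant to manage.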