🤖 AI Summary
Existing methods for generating timely rebuttals to emerging hate speech on social media rely on static rebuttal corpora and therefore suffer from poor generalizability and delayed updates. This paper proposes a zero-shot counter-narrative generation framework that produces highly specific and logically robust counter-narratives for unseen hate speech without requiring domain-specific annotations or model fine-tuning. The approach features two key innovations: (1) a multi-dimensional hierarchical retrieval mechanism that jointly models stance consistency, semantic similarity, and contextual adaptability; and (2) an energy-based constrained decoding framework that differentiably integrates knowledge-preservation, rebuttal-strength, and fluency objectives. Empirical evaluation shows improvements of over 2.0% in relevance and over 4.5% in rebuttal success rate against strong baselines, demonstrating superior cross-domain generalization capability.
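The multi-dimensional hierarchical retrieval described above can be pictured as a staged pipeline: filter candidate knowledge by stance, rank by semantic similarity, then re-rank by contextual fitness. The sketch below is an illustrative stand-in, not the paper's implementation; the scoring functions, field names, and stage ordering are all assumptions for exposition, with toy vectors in place of learned embeddings.

```python
# Hypothetical sketch of multi-dimensional hierarchical retrieval:
# stage 1 filters by stance, stage 2 ranks by semantic similarity,
# stage 3 re-ranks the top candidates by a fitness (adaptability)
# score.  All scores here are toy stand-ins for learned models.
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hierarchical_retrieve(query_vec, candidates, k_sem=3, top_k=2):
    """candidates: dicts with 'vec', 'stance', and 'fitness' keys.
    Returns the top_k candidates after the three-stage cascade."""
    # Stage 1: keep only knowledge whose stance opposes the hate speech.
    opposed = [c for c in candidates if c["stance"] == "counter"]
    # Stage 2: rank survivors by semantic similarity to the query.
    by_sem = sorted(opposed,
                    key=lambda c: cosine(query_vec, c["vec"]),
                    reverse=True)[:k_sem]
    # Stage 3: re-rank the semantic shortlist by contextual fitness.
    by_fit = sorted(by_sem, key=lambda c: c["fitness"], reverse=True)
    return by_fit[:top_k]
```

A candidate that is highly similar but takes the wrong stance is dropped in stage 1, which is the point of extending retrieval beyond a single similarity metric.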
📝 Abstract
The proliferation of hate speech (HS) on social media poses a serious threat to societal security. Automatic counter-narrative (CN) generation, as an active strategy for HS intervention, has garnered increasing attention in recent years. Existing methods for automatically generating CNs mainly rely on re-training or fine-tuning pre-trained language models (PLMs) on human-curated CN corpora. Unfortunately, the annotation speed of CN corpora cannot keep up with the growth of HS targets, and generating specific and effective CNs for unseen targets remains a significant challenge. To tackle this issue, we propose Retrieval-Augmented Zero-shot Generation (ReZG) to generate CNs with high specificity for unseen targets. Specifically, we propose a multi-dimensional hierarchical retrieval method that integrates stance, semantics, and fitness, extending the retrieval metric from a single dimension to multiple dimensions suited to the knowledge needed to refute HS. We then implement an energy-based constrained decoding mechanism that enables PLMs to use differentiable knowledge-preservation, countering, and fluency constraint functions, instead of in-target CNs, as control signals for generation, thereby achieving zero-shot CN generation. With the above techniques, ReZG can flexibly integrate external knowledge and improve the specificity of CNs. Experimental results show that ReZG exhibits stronger generalization capabilities and outperforms strong baselines, with significant improvements of over 2.0% in relevance and over 4.5% in countering success rate.