ReMoD: Rethinking Modality Contribution in Multimodal Stance Detection via Dual Reasoning

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal stance detection (MSD) methods typically employ static modality fusion, overlooking the heterogeneous contributions of different modalities and thereby introducing stance-misclassification noise. To address this, the authors propose ReMoD, a dual-reasoning framework for dynamic modality modeling. It comprises two sequential stages: intuition-driven initial inference and reflection-driven corrective inference. Together, these stages build a Modality Experience Pool — encoding historical modality performance and updated via a Modality Chain-of-Thought (Modality-CoT) — and a Semantic Experience Pool — capturing context-aware semantic cues and refined via a Semantic Chain-of-Thought (Semantic-CoT) — enabling dynamic, context-sensitive modality weighting and bias correction. This design improves both the interpretability and the robustness of modality-specific contributions. Evaluated on the MMSD benchmark, the model significantly outperforms most baselines and demonstrates strong generalization, establishing a paradigm for multimodal stance modeling grounded in adaptive, inference-aware modality integration.

📝 Abstract
Multimodal Stance Detection (MSD) is a crucial task for understanding public opinion on social media. Existing work simply fuses information from various modalities to learn stance representations, overlooking the varying contributions of different modalities to stance expression. As a result, coarse modality combination risks learning errors and introduces stance-misunderstanding noise into the stance learning process. To address this, we draw inspiration from the dual-process theory of human cognition and propose **ReMoD**, a framework that **Re**thinks **Mo**dality contribution to stance expression through a **D**ual-reasoning paradigm. ReMoD integrates *experience-driven intuitive reasoning*, which captures initial stance cues, with *deliberate reflective reasoning*, which adjusts for modality biases and refines stance judgments, thereby dynamically weighting modality contributions based on their actual expressive power for the target stance. Specifically, the intuitive stage queries the Modality Experience Pool (MEP) and Semantic Experience Pool (SEP) to form an initial stance hypothesis, prioritizing historically impactful modalities. This hypothesis is then refined in the reflective stage via two reasoning chains: Modality-CoT updates the MEP with adaptive fusion strategies to amplify relevant modalities, while Semantic-CoT refines the SEP with deeper contextual insights into stance semantics. These dual experience structures are continuously refined during training and recalled at inference to guide robust, context-aware stance decisions. Extensive experiments on the public MMSD benchmark demonstrate that ReMoD significantly outperforms most baseline models and exhibits strong generalization capabilities.
Problem

Research questions and friction points this paper is trying to address.

Addresses varying modality contributions in multimodal stance detection
Reduces stance misunderstanding from rough modality fusion approaches
Dynamically weights modality contributions based on expressive power
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-reasoning paradigm integrates intuitive and reflective processes
Dynamic modality weighting based on expressive power for stance
Continuous refinement of modality and semantic experience pools
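The paper describes the pipeline only at a high level, but the dual-reasoning loop it outlines can be sketched in miniature. The code below is a hypothetical illustration, not the authors' implementation: `ExperiencePool`, `intuitive_stage`, `reflective_stage`, the two-modality scalar scores, and the weight-update rule are all illustrative stand-ins for the MEP/SEP and CoT-driven refinement described in the abstract.

```python
# Hypothetical sketch of ReMoD's dual-reasoning loop (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ExperiencePool:
    """Stores past reasoning traces keyed by a coarse context signature."""
    entries: dict = field(default_factory=dict)

    def recall(self, context_key, default):
        return self.entries.get(context_key, default)

    def update(self, context_key, value):
        self.entries[context_key] = value

def intuitive_stage(text_score, image_score, mep, sep, context_key):
    """Form an initial stance hypothesis from recalled modality weights and cues."""
    w_text, w_image = mep.recall(context_key, (0.5, 0.5))  # historical modality weights
    hint = sep.recall(context_key, 0.0)                    # prior semantic cue
    return w_text * text_score + w_image * image_score + hint

def reflective_stage(initial, text_score, image_score, mep, sep, context_key, lr=0.1):
    """Shift weight toward the more expressive modality and store a refined cue."""
    w_text, w_image = mep.recall(context_key, (0.5, 0.5))
    # Modality-CoT analogue: amplify the modality with the stronger stance signal.
    if abs(text_score) > abs(image_score):
        w_text, w_image = min(1.0, w_text + lr), max(0.0, w_image - lr)
    else:
        w_text, w_image = max(0.0, w_text - lr), min(1.0, w_image + lr)
    mep.update(context_key, (w_text, w_image))
    # Semantic-CoT analogue: record how reflection changed the hypothesis.
    refined = w_text * text_score + w_image * image_score
    sep.update(context_key, refined - initial)
    return refined
```

Under this toy scheme, a text-dominant example (`text_score=0.8`, `image_score=0.2`) starts from equal weights in the intuitive stage, and the reflective stage shifts weight toward text (0.6/0.4), so later recalls of the same context prioritize the historically more expressive modality, mirroring the paper's continuous refinement of the experience pools.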
👥 Authors
Bingbing Wang — Harbin Institute of Technology, Shenzhen (natural language processing)
Zhengda Jin — Harbin Institute of Technology, Shenzhen, China
Bin Liang — Harbin Institute of Technology, Shenzhen, China
Jing Li — The Hong Kong Polytechnic University, Hong Kong, China
Ruifeng Xu — Professor, Harbin Institute of Technology at Shenzhen (Natural Language Processing, Affective Computing, Argumentation Mining, LLMs, Bioinformatics)