🤖 AI Summary
Existing multimodal stance detection (MSD) methods typically fuse modalities statically, overlooking the heterogeneous contributions of different modalities to stance expression and thereby introducing stance-misclassification noise. To address this, the authors propose **ReMoD**, a dual-reasoning framework for dynamic modality modeling. It comprises two sequential stages: experience-driven intuitive reasoning, which queries a Modality Experience Pool (MEP) encoding historical modality performance and a Semantic Experience Pool (SEP) capturing context-aware semantic cues to form an initial stance hypothesis, and deliberate reflective reasoning, which refines that hypothesis via Modality-CoT and Semantic-CoT reasoning chains, enabling dynamic, context-sensitive modality weighting and bias correction. The approach improves both the interpretability and robustness of modality-specific contributions. On the public MMSD benchmark, ReMoD significantly outperforms most baseline models and exhibits strong generalization, suggesting a promising paradigm for multimodal stance modeling grounded in adaptive, inference-aware modality integration.
📝 Abstract
Multimodal Stance Detection (MSD) is a crucial task for understanding public opinion on social media. Existing work simply fuses information from various modalities to learn stance representations, overlooking the varying contributions of different modalities to stance expression. As a result, such coarse modality combination risks introducing stance-misunderstanding noise into the stance learning process. To address this, we draw inspiration from the dual-process theory of human cognition and propose **ReMoD**, a framework that **Re**thinks the **Mo**dality contribution of stance expression through a **D**ual-reasoning paradigm. ReMoD combines *experience-driven intuitive reasoning*, which captures initial stance cues, with *deliberate reflective reasoning*, which adjusts for modality biases and refines stance judgments, thereby dynamically weighting modality contributions according to their actual expressive power for the target stance. Specifically, the intuitive stage queries the Modality Experience Pool (MEP) and Semantic Experience Pool (SEP) to form an initial stance hypothesis, prioritizing historically impactful modalities. This hypothesis is then refined in the reflective stage via two reasoning chains: Modality-CoT updates the MEP with adaptive fusion strategies to amplify relevant modalities, while Semantic-CoT refines the SEP with deeper contextual insight into stance semantics. These dual experience structures are continuously refined during training and recalled at inference to guide robust, context-aware stance decisions. Extensive experiments on the public MMSD benchmark demonstrate that ReMoD significantly outperforms most baseline models and exhibits strong generalization capabilities.
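The two-stage procedure the abstract describes (recall from experience pools to form an intuitive hypothesis, then reflective re-weighting of modalities) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the pool structure, cosine-similarity recall, the `intuitive_stage`/`reflective_stage` functions, and the confidence-gated re-weighting rule are all hypothetical simplifications for demonstration.

```python
import numpy as np

class ExperiencePool:
    """Hypothetical experience pool: stores (key, record) pairs and
    recalls records whose keys are most similar to a query vector."""
    def __init__(self):
        self.keys, self.records = [], []

    def add(self, key, record):
        self.keys.append(key / (np.linalg.norm(key) + 1e-8))
        self.records.append(record)

    def recall(self, query, k=3):
        if not self.keys:
            return []
        q = query / (np.linalg.norm(query) + 1e-8)
        sims = np.stack(self.keys) @ q          # cosine similarity to each key
        top = np.argsort(sims)[::-1][:k]        # indices of the k best matches
        return [self.records[i] for i in top]

def intuitive_stage(features, mep):
    """Form an initial stance hypothesis by fusing modality features with
    weights recalled from the MEP (uniform weights if the pool is empty).
    An analogous SEP recall would contribute semantic cues."""
    query = np.mean(list(features.values()), axis=0)
    recalled = mep.recall(query)
    if recalled:
        weights = np.mean(recalled, axis=0)     # average historical weightings
    else:
        weights = np.ones(len(features)) / len(features)
    fused = sum(w * f for w, f in zip(weights, features.values()))
    return fused, weights

def reflective_stage(fused, weights, features, confidence):
    """Toy reflection: if the intuitive hypothesis is low-confidence,
    shift weight toward modalities most aligned with the fused cue."""
    if confidence >= 0.5:
        return fused, weights
    align = np.array([f @ fused for f in features.values()])
    adjusted = weights * np.exp(align - align.max())
    adjusted /= adjusted.sum()                  # keep a valid distribution
    fused = sum(w * f for w, f in zip(adjusted, features.values()))
    return fused, adjusted
```

In ReMoD the recalled records would be richer CoT-derived fusion strategies refined during training; the sketch reduces them to weight vectors only to make the recall-then-reflect control flow concrete.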