🤖 AI Summary
This work addresses the challenge in multimodal sarcasm understanding where existing methods struggle to jointly perform detection and explanation while neglecting their causal interdependence. To this end, we propose MuVaC, a novel framework that, for the first time, integrates structural causal models into this task. MuVaC explicitly models the causal relationship between detection and explanation through variational causal pathways and enhances multimodal representations via an "align-then-fuse" strategy. By incorporating consistency constraints and joint optimization, the framework significantly improves both inference reliability and overall performance. Experimental results on public benchmarks demonstrate that MuVaC outperforms state-of-the-art approaches, achieving more accurate sarcasm detection alongside more interpretable explanations.
📝 Abstract
The prevalence of sarcasm in multimodal dialogues on social platforms presents a crucial yet challenging task for understanding the true intent behind online content. Comprehensive sarcasm analysis involves two key tasks: Multimodal Sarcasm Detection (MSD) and Multimodal Sarcasm Explanation (MuSE). Intuitively, detection is the outcome of the reasoning process that explains the sarcasm. Current research predominantly addresses either MSD or MuSE as a single task, and even recent work that attempts to integrate the two often overlooks their inherent causal dependency. To bridge this gap, we propose MuVaC, a variational causal inference framework that mimics the human cognitive mechanism for understanding sarcasm, enabling robust multimodal feature learning to jointly optimize MSD and MuSE. Specifically, we first model MSD and MuSE from the perspective of structural causal models, establishing variational causal pathways that define the joint optimization objectives. Next, we design an alignment-then-fusion approach that integrates multimodal features, providing robust fused representations for sarcasm detection and explanation generation. Finally, we enhance reasoning trustworthiness by enforcing consistency between detection results and explanations. Experimental results on public datasets demonstrate the superiority of MuVaC, offering a new perspective on understanding multimodal sarcasm.
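To make the alignment-then-fusion idea concrete, the following is a minimal sketch of how text and image features can first be projected into a shared space (where an alignment score is well-defined) and then fused for a detection head. All dimensions, random projections, and the concatenation-based fusion are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature dimensions (illustrative assumptions, not from the paper)
d_text, d_img, d_shared = 8, 6, 4

# Stand-ins for encoder outputs of one text-image pair
text_feat = rng.normal(size=d_text)
img_feat = rng.normal(size=d_img)

# Step 1: align -- project both modalities into a shared space
W_text = rng.normal(size=(d_shared, d_text))
W_img = rng.normal(size=(d_shared, d_img))
z_text = W_text @ text_feat
z_img = W_img @ img_feat

# Normalize so a cosine-style alignment score is well-defined;
# training would push matched pairs toward high similarity
z_text = z_text / np.linalg.norm(z_text)
z_img = z_img / np.linalg.norm(z_img)
align_score = float(z_text @ z_img)  # lies in [-1, 1]

# Step 2: fuse -- combine the aligned representations for downstream heads
fused = np.concatenate([z_text, z_img])       # shared fusion representation
W_det = rng.normal(size=(2, fused.size))      # detection head: sarcastic / not
logits = W_det @ fused
probs = np.exp(logits) / np.exp(logits).sum() # softmax over the two classes

print(align_score, probs)
```

In a full model, the alignment step would carry its own loss (pulling matched pairs together), while the fused representation would feed both the detection head and the explanation decoder, so that both objectives shape the shared space.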