SarcasmMiner: A Dual-Track Post-Training Framework for Robust Audio-Visual Sarcasm Reasoning

📅 2026-03-05
🤖 AI Summary
This work addresses the challenge of multimodal sarcasm detection, where pragmatic inconsistencies among textual, acoustic, and visual cues often lead existing models to generate hallucinations during cross-modal reasoning. To mitigate this, the task is reframed as a structured reasoning problem, and a reinforcement learning–based post-training framework is proposed, featuring a dual-track distillation mechanism. This mechanism leverages high-quality teacher trajectories to initialize the student model and employs full-trajectory training to construct a generative reward model (GenRM). By integrating Group Relative Policy Optimization (GRPO) with fine-tuning of multimodal foundation models, the approach significantly enhances reasoning robustness and alignment. Evaluated on MUStARD++, the method achieves an F1 score of 70.22%, surpassing both zero-shot (59.83%) and supervised fine-tuning (68.23%) baselines, thereby demonstrating its effectiveness.
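The summary describes optimizing the student with GRPO using decoupled rewards for accuracy and reasoning quality. A minimal sketch of the group-relative advantage computation is below; the weighting scheme, function names, and reward values are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of GRPO-style advantage computation with decoupled
# accuracy and reasoning-quality rewards. Weights w_acc / w_qual are
# assumptions for illustration.
from statistics import mean, pstdev

def group_relative_advantages(acc_rewards, quality_rewards,
                              w_acc=1.0, w_qual=0.5):
    """Combine the two decoupled reward tracks per trajectory, then
    normalize within the sampled group (the 'group relative' step)."""
    combined = [w_acc * a + w_qual * q
                for a, q in zip(acc_rewards, quality_rewards)]
    mu, sigma = mean(combined), pstdev(combined)
    if sigma == 0:
        # All trajectories tied: no relative signal for the policy update.
        return [0.0 for _ in combined]
    return [(r - mu) / sigma for r in combined]

# Example: four sampled trajectories for one input.
# acc_rewards: 1 if the sarcasm label is correct, else 0.
# quality_rewards: GenRM score for the reasoning trace (assumed in [0, 1]).
adv = group_relative_advantages([1, 0, 1, 0], [0.8, 0.2, 0.4, 0.6])
```

Because advantages are normalized within each group, a correct trajectory with a well-grounded reasoning trace is pushed up relative to its peers even when several trajectories share the same accuracy reward.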

📝 Abstract
Multimodal sarcasm detection requires resolving pragmatic incongruity across textual, acoustic, and visual cues through cross-modal reasoning. To enable robust sarcasm reasoning with foundation models, we propose SarcasmMiner, a reinforcement learning-based post-training framework that resists hallucination in multimodal reasoning. We reformulate sarcasm detection as structured reasoning and adopt a dual-track distillation strategy: high-quality teacher trajectories initialize the student model, while the full set of trajectories trains a generative reward model (GenRM) to evaluate reasoning quality. The student is optimized with group relative policy optimization (GRPO) using decoupled rewards for accuracy and reasoning quality. On MUStARD++, SarcasmMiner improves F1 to 70.22%, up from 59.83% (zero-shot) and 68.23% (supervised fine-tuning). These findings suggest that reasoning-aware reward modeling enhances both performance and multimodal grounding.
Problem

Research questions and friction points this paper is trying to address.

multimodal sarcasm detection
pragmatic incongruity
cross-modal reasoning
audio-visual sarcasm
foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal sarcasm detection
structured reasoning
dual-track distillation
generative reward model
group relative policy optimization
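The dual-track distillation listed above routes teacher trajectories into two training streams. The sketch below shows one plausible split; the field names, quality threshold, and filtering rule are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch of a dual-track distillation split: high-quality
# teacher trajectories seed the student (SFT track), while the full set,
# including failures, trains the generative reward model (GenRM track).
# Field names and the 0.8 threshold are hypothetical.

def split_trajectories(trajectories, quality_threshold=0.8):
    """Route each teacher trajectory to one or both training tracks."""
    sft_track = [t for t in trajectories
                 if t["correct"] and t["quality"] >= quality_threshold]
    # GenRM sees successes and failures alike, so it learns to
    # discriminate reasoning quality rather than memorize good traces.
    genrm_track = list(trajectories)
    return sft_track, genrm_track

trajs = [
    {"id": 1, "correct": True,  "quality": 0.9},  # kept for SFT
    {"id": 2, "correct": True,  "quality": 0.6},  # below threshold
    {"id": 3, "correct": False, "quality": 0.7},  # wrong label
]
sft, genrm = split_trajectories(trajs)
```

Keeping the GenRM track unfiltered is the point of the "dual-track" design: the reward model needs contrastive exposure to flawed reasoning that the student's initialization should never imitate.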