🤖 AI Summary
This work addresses the challenge of detecting multimodal fake news in short videos, where each individual modality appears plausible yet subtle cross-modal inconsistencies remain. To tackle this, the authors propose MAGIC3, a framework that explicitly models multi-level consistency among text, visual, and audio modalities. It extracts fine-grained alignment features via a cross-modal attention mechanism and improves robustness with an uncertainty-aware classifier and multi-style large language model (LLM) rewriting, achieving high detection accuracy. Furthermore, a selective vision-language model (VLM) routing strategy lets the system match the performance of state-of-the-art VLMs on the FakeSV and FakeTT datasets while improving inference throughput by 18–27× and reducing GPU memory usage by 93%.
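The selective routing idea can be illustrated with a minimal sketch: a lightweight classifier's prediction is accepted when it is confident, and only uncertain samples are escalated to the expensive VLM. The entropy-based gate, the function names, and the threshold value here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def route(probs, threshold=0.45):
    """Return 'fast' to keep the lightweight classifier's label,
    or 'vlm' to escalate the sample to the vision-language model.
    The threshold is a hypothetical value for illustration."""
    return "vlm" if predictive_entropy(probs) > threshold else "fast"

# A confident prediction stays on the fast path; an ambiguous one escalates.
print(route([0.97, 0.03]))  # fast
print(route([0.55, 0.45]))  # vlm
```

Because most samples take the fast path, overall throughput is dominated by the cheap classifier, which is how such two-stage systems can approach VLM accuracy at a fraction of the cost.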
📝 Abstract
Short-form video platforms are major channels for news but also fertile ground for multimodal misinformation, where each modality appears plausible in isolation yet cross-modal relationships are subtly inconsistent, such as mismatched visuals and captions. On two benchmark datasets, FakeSV (Chinese) and FakeTT (English), we observe a clear asymmetry: real videos exhibit high text-visual but moderate text-audio consistency, while fake videos show the opposite pattern. Moreover, a single global consistency score forms an interpretable axis along which fake probability and prediction errors vary smoothly. Motivated by these observations, we present MAGIC3 (Modal-Adversarial Gated Interaction and Consistency-Centric Classifier), a detector that explicitly models and exposes tri-modal consistency signals at multiple granularities. MAGIC3 combines explicit pairwise and global consistency modeling with token- and frame-level consistency signals derived from cross-modal attention, incorporates multi-style LLM rewrites to obtain style-robust text representations, and employs an uncertainty-aware classifier for selective VLM routing. Using pre-extracted features, MAGIC3 consistently outperforms the strongest non-VLM baselines on FakeSV and FakeTT. While matching VLM-level accuracy, the two-stage system achieves 18–27× higher throughput and 93% lower VRAM usage, offering a strong cost-performance tradeoff.
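The pairwise and global consistency features at the heart of the abstract can be sketched as cosine similarities between per-modality embeddings, with their mean serving as the single global score. This is a minimal illustration under the assumption that each modality is already encoded as a fixed-length vector; the feature names and the use of a plain mean are simplifying assumptions, not MAGIC3's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def consistency_features(text, visual, audio):
    """Pairwise cross-modal consistency scores plus their mean
    as a single global consistency score (illustrative only)."""
    feats = {
        "text_visual": cosine(text, visual),
        "text_audio": cosine(text, audio),
        "visual_audio": cosine(visual, audio),
    }
    feats["global"] = sum(feats.values()) / 3
    return feats

# Toy example: text and visual embeddings align, audio is orthogonal,
# mimicking the asymmetry the paper reports for real vs. fake videos.
print(consistency_features([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]))
```

In the full model these scalar scores would be concatenated with finer-grained token- and frame-level signals before classification; the sketch only shows the coarse pairwise/global level.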