๐ค AI Summary
Current vision-language models struggle to accurately interpret sarcastic images, primarily due to excessive coupling between local entity relationships and global semantic contextโleading to misinterpretation, bias, and hallucination. To address this, we propose a training-free multi-agent framework featuring a novel vision-cascaded disentanglement mechanism that explicitly decouples local object relations from global scene semantics. Furthermore, we introduce an uncertainty-aware chain-of-thought reasoning strategy that iteratively validates and refines intermediate inferences. By integrating fine-grained semantic decomposition with dynamic confidence-guided reasoning, our approach significantly mitigates hallucination. Evaluated on multiple sarcastic image understanding benchmarks, the framework substantially outperforms state-of-the-art methods: accuracy improves markedly, and hallucination rates decrease by 32%โ47%. These results demonstrate its effectiveness and generalizability for complex visual-semantic parsing tasks.
๐ Abstract
Satire, a form of artistic expression combining humor with implicit critique, holds significant social value by illuminating societal issues. Despite its cultural and societal significance, satire comprehension, particularly in purely visual forms, remains a challenging task for current vision-language models. This task requires not only detecting satire but also deciphering its nuanced meaning and identifying the implicated entities. Existing models often fail to effectively integrate local entity relationships with global context, leading to misinterpretation, comprehension biases, and hallucinations. To address these limitations, we propose SatireDecoder, a training-free framework designed to enhance satirical image comprehension. Our approach proposes a multi-agent system performing visual cascaded decoupling to decompose images into fine-grained local and global semantic representations. In addition, we introduce a chain-of-thought reasoning strategy guided by uncertainty analysis, which breaks down the complex satire comprehension process into sequential subtasks with minimized uncertainty. Our method significantly improves interpretive accuracy while reducing hallucinations. Experimental results validate that SatireDecoder outperforms existing baselines in comprehending visual satire, offering a promising direction for vision-language reasoning in nuanced, high-level semantic tasks.