Towards Faithful Reasoning in Comics for Small MLLMs

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges faced by small-scale multimodal large language models in comic visual question answering, particularly in handling symbolic abstraction, narrative logic, and humor comprehension, where standard chain-of-thought methods often suffer from state entanglement and unproductive exploration. To mitigate these issues, the authors propose a modular chain-of-thought reasoning framework integrated with GRPO-based reinforcement fine-tuning and a structured reward mechanism, enhancing reasoning faithfulness and transferability under resource constraints. Evaluated across five benchmark datasets for comic and humorous visual reasoning, the approach significantly outperforms existing methods: a 3B-parameter model surpasses current state-of-the-art results, and when deployed as a plug-in module across diverse multimodal large language models, it yields an average performance improvement of 12.1%.

📝 Abstract
Comic-based visual question answering (CVQA) poses distinct challenges to multimodal large language models (MLLMs) due to its reliance on symbolic abstraction, narrative logic, and humor, which differ from conventional VQA tasks. Although Chain-of-Thought (CoT) prompting is widely used to enhance MLLM reasoning, surprisingly, its direct application to CVQA often degrades performance, especially in small-scale models. Our theoretical and empirical analyses reveal that standard CoT in CVQA suffers from state entanglement, spurious transitions, and exploration inefficiency, with small models particularly vulnerable in resource-constrained settings. To address these issues, we propose a novel comic reasoning framework, designed to produce more faithful and transferable reasoning chains in small MLLMs. Specifically, our framework combines modular CoT generation with GRPO-based reinforcement fine-tuning and a novel structured reward. Beyond comic VQA, we further evaluate our approach on a broader class of humor-centric and abstract visual reasoning tasks, including meme understanding and editorial cartoon interpretation. Across five challenging benchmarks, our 3B model outperforms state-of-the-art methods, and plug-in experiments yield an additional average improvement of 12.1% across different MLLMs.
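The abstract's core training recipe pairs GRPO-style reinforcement fine-tuning with a structured reward. The following is a minimal illustrative sketch of GRPO's group-relative advantage estimation; the reward components, their weights, and the `<think>` tag convention are assumptions for illustration, not the paper's actual reward design.

```python
import statistics

def structured_reward(response: str, answer: str) -> float:
    """Illustrative structured reward: a weighted sum of a format term
    (did the model emit an explicit reasoning trace?) and a correctness
    term. The 0.2 / 0.8 weights are hypothetical, not from the paper."""
    has_reasoning = "<think>" in response and "</think>" in response
    format_score = 1.0 if has_reasoning else 0.0
    correct = 1.0 if answer in response else 0.0
    return 0.2 * format_score + 0.8 * correct

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO's core idea: instead of a learned value critic, each sampled
    response's reward is normalized against the mean/std of its own
    sampling group, giving a per-sample advantage signal."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + 1e-6) for r in rewards]

# One sampled group of 4 candidate responses for a comic VQA prompt.
group = [
    "<think>panel 2 undercuts the caption</think> The joke is irony.",
    "The joke is irony.",
    "<think>misread the caption</think> The joke is slapstick.",
    "No idea.",
]
rewards = [structured_reward(r, "irony") for r in group]
advs = group_relative_advantages(rewards)
# Responses above the group mean get positive advantages (reinforced),
# those below get negative ones (suppressed).
```

In a full pipeline these advantages would weight the policy-gradient update on the token log-probabilities of each response; the sketch only shows the reward shaping and group normalization step.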
Problem

Research questions and friction points this paper is trying to address.

comic-based visual question answering
multimodal large language models
Chain-of-Thought
symbolic abstraction
humor understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

modular Chain-of-Thought
GRPO-based reinforcement fine-tuning
structured reward
comic VQA
small MLLMs