Zooming into Comics: Region-Aware RL Improves Fine-Grained Comic Understanding in Vision-Language Models

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) exhibit limited capability in comic understanding—particularly in interpreting stylized line art, onomatopoeia, and multi-panel layouts with complex spatial structures. To systematically diagnose these limitations, we introduce AI4VA-FG, the first fine-grained benchmark for visual narrative parsing, encompassing entity recognition, character reasoning, and story construction across multiple granularities. To enhance VLMs’ modeling capacity, we propose Region-Aware Reinforcement Learning (RARL), a novel training paradigm that dynamically attends to semantically critical image regions while integrating “vision-text dual-thought” reasoning. We apply RARL via post-training optimization—combining supervised fine-tuning and reinforcement learning—on models including Qwen2.5-VL. Experiments demonstrate significant improvements on AI4VA-FG: RARL boosts low-level visual recognition accuracy and high-level plot sequence ordering performance, marking substantive progress toward structured visual narrative understanding in VLMs.

📝 Abstract
Complex visual narratives, such as comics, present a significant challenge to Vision-Language Models (VLMs). Despite excelling on natural images, VLMs often struggle with stylized line art, onomatopoeia, and densely packed multi-panel layouts. To address this gap, we introduce AI4VA-FG, the first fine-grained and comprehensive benchmark for VLM-based comic understanding. It spans tasks from foundational recognition and detection to high-level character reasoning and narrative construction, supported by dense annotations for characters, poses, and depth. We then evaluate state-of-the-art proprietary models, including GPT-4o and Gemini-2.5, and open-source models such as Qwen2.5-VL, revealing substantial performance deficits across the core tasks of our benchmark and underscoring that comic understanding remains an unsolved challenge. To enhance VLMs' capabilities in this domain, we systematically investigate post-training strategies, including supervised fine-tuning on solutions (SFT-S), supervised fine-tuning on reasoning trajectories (SFT-R), and reinforcement learning (RL). Furthermore, inspired by the emerging "Thinking with Images" paradigm, we propose Region-Aware Reinforcement Learning (RARL) for VLMs, which trains models to dynamically attend to relevant regions through zoom-in operations. We observe that when applied to the Qwen2.5-VL model, RL and RARL yield significant gains in low-level entity recognition and high-level storyline ordering, paving the way for more accurate and efficient VLM applications in the comics domain.
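For the high-level storyline-ordering task, an RL setup needs a scalar reward comparing a model's predicted panel order against the gold order. The paper does not specify its reward design; the sketch below is one plausible choice, a normalized pairwise-concordance (Kendall-tau-style) score, with the function name `ordering_reward` and the panel IDs purely illustrative.

```python
# Hypothetical reward for a storyline-ordering task: fraction of panel
# pairs whose relative order matches the gold sequence (1.0 = identical
# order, 0.0 = fully reversed). Illustrative only, not the paper's design.
from itertools import combinations

def ordering_reward(pred, gold):
    """Score a predicted panel ordering against the gold ordering in [0, 1]."""
    pos = {panel: i for i, panel in enumerate(gold)}  # gold rank of each panel
    pairs = list(combinations(range(len(pred)), 2))
    if not pairs:
        return 1.0
    concordant = sum(1 for i, j in pairs if pos[pred[i]] < pos[pred[j]])
    return concordant / len(pairs)

gold = ["p1", "p2", "p3", "p4"]
print(ordering_reward(["p1", "p2", "p3", "p4"], gold))  # → 1.0
print(ordering_reward(["p2", "p1", "p3", "p4"], gold))  # one swapped pair → 5/6
```

A dense reward like this gives partial credit for nearly correct orderings, which is typically easier to optimize with policy-gradient methods than an exact-match 0/1 reward.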
Problem

Research questions and friction points this paper is trying to address.

VLMs struggle with stylized comic art and dense layouts
Existing models show performance gaps in fine-grained comic understanding
Current methods lack dynamic region attention for comic narratives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Region-Aware Reinforcement Learning for dynamic zoom-in
Reinforcement Learning improves entity recognition and ordering
Supervised fine-tuning on solutions and reasoning trajectories
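The dynamic zoom-in can be pictured as a tool the policy invokes mid-reasoning: the model emits a bounding box over a semantically critical region (a panel, a speech balloon) and receives the crop as a new visual observation. A minimal sketch, assuming the box is given in normalized coordinates as VLM grounding outputs often are; the names `Box` and `zoom_in` are illustrative, not from the paper.

```python
# Hypothetical "zoom-in" tool a region-aware VLM policy might call.
# Images are modeled as plain 2D grids (lists of rows) to keep the
# sketch dependency-free; a real system would crop pixel tensors.
from dataclasses import dataclass

@dataclass
class Box:
    # Normalized corner coordinates in [0, 1].
    x0: float
    y0: float
    x1: float
    y1: float

def zoom_in(image, box):
    """Crop the grid to the region the model chose to attend to."""
    h, w = len(image), len(image[0])
    r0, r1 = int(box.y0 * h), max(int(box.y1 * h), int(box.y0 * h) + 1)
    c0, c1 = int(box.x0 * w), max(int(box.x1 * w), int(box.x0 * w) + 1)
    return [row[c0:c1] for row in image[r0:r1]]

# Example: a 4x4 "page"; the policy zooms into the top-left panel.
page = [[(r, c) for c in range(4)] for r in range(4)]
panel = zoom_in(page, Box(0.0, 0.0, 0.5, 0.5))
print(len(panel), len(panel[0]))  # → 2 2
```

In the RARL loop described by the abstract, such crops would be fed back into the model so that subsequent reasoning steps condition on the zoomed region rather than the full page.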