🤖 AI Summary
This work addresses the challenge that multimodal large language models often lose fine-grained visual details due to global image encoding, hindering their ability to perceive small objects or subtle markers. To overcome this limitation, we propose TikArt, an aperture-guided agent that frames reasoning as a sequential decision-making process over regions of interest through a “Think-Aperture-Observe” loop. TikArt alternates between language generation and learnable zooming/SAM2 segmentation operations, converting localized visual cues into persistent linguistic memory. Built upon the Qwen3-VL-8B architecture and trained with the AGRPO reinforcement learning algorithm, TikArt is the first agent to integrate learnable aperture operations into multi-step reasoning. Using a two-stage curriculum strategy, it achieves state-of-the-art performance across multiple benchmarks, including V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg, while producing interpretable action trajectories.
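The Think-Aperture-Observe loop described above can be sketched as a minimal control flow. This is an illustrative toy, not the paper's implementation: the `Aperture` action type, the scripted stand-in policy, and the templated observations are all hypothetical, and the real system would run an MLLM for thinking and SAM2 for mask-based crops.

```python
from dataclasses import dataclass


@dataclass
class Aperture:
    """One aperture action: 'zoom' gives a rectangular crop; 'segment'
    stands in for a SAM2 mask-based crop (stubbed as a box here)."""
    kind: str            # "zoom" or "segment"
    box: tuple           # (x0, y0, x1, y1) region of interest


def run_episode(policy, image, max_steps=4):
    """Think-Aperture-Observe loop: alternate deciding on an action with
    cropping a region and storing a textual observation as memory."""
    memory = []  # persistent linguistic memory: observations survive steps
    for _ in range(max_steps):
        action = policy(image, memory)       # "Think": pick the next action
        if action is None:                   # policy is ready to answer
            break
        x0, y0, x1, y1 = action.box          # "Aperture": attend to a region
        crop = [row[x0:x1] for row in image[y0:y1]]
        # "Observe": the model must verbalize the crop; faked with a template.
        memory.append(f"{action.kind} on {action.box}: saw {sum(map(sum, crop))} mass")
    return memory


# Toy 4x4 "image" and a scripted policy standing in for the MLLM.
image = [[0, 1, 0, 0],
         [0, 3, 0, 0],
         [0, 0, 0, 2],
         [0, 0, 0, 2]]


def scripted_policy(img, memory):
    script = [Aperture("zoom", (0, 0, 2, 2)),
              Aperture("segment", (2, 2, 4, 4))]
    return script[len(memory)] if len(memory) < len(script) else None


trace = run_episode(scripted_policy, image)
```

The key design point the sketch preserves is that every aperture action is forced to leave a linguistic trace, so evidence from a crop remains available after the pixels are discarded.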
📝 Abstract
We address fine-grained visual reasoning in multimodal large language models (MLLMs), where key evidence may reside in tiny objects, cluttered regions, or subtle markings that are lost under a single global image encoding. We introduce TikArt (Thinking Aperture), an aperture-guided agent that casts multi-step vision-language reasoning as a decision process over regions of interest. TikArt follows a Think-Aperture-Observe loop, alternating between language generation and two aperture actions: Zoom extracts rectangular crops, while Segment invokes SAM2 to obtain mask-based crops for irregular targets. After every action, the model must produce an explicit observation, turning local visual cues into persistent linguistic memory. Built on Qwen3-VL-8B, TikArt optimizes its reasoning policy with AGRPO, a GRPO-style reinforcement learning algorithm with a two-stage curriculum: it warms up segmentation actions and then jointly optimizes visual math, fine-grained VQA, and segmentation, using rewards that couple task success with purposeful aperture use. Experiments on V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg show consistent gains over the backbone and yield interpretable aperture trajectories for high-resolution reasoning.
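The AGRPO objective is described only at a high level above; as a rough illustration, the sketch below shows a generic GRPO-style group-normalized advantage combined with a composite reward that couples task success with purposeful aperture use. The reward terms, weights, and the `composite_reward` / `group_advantages` helpers are assumptions for illustration, not the paper's actual formulation.

```python
import statistics


def composite_reward(correct, used_aperture, aperture_helped):
    """Hypothetical shaping: reward task success, give a small bonus when an
    aperture action was actually useful, and a small penalty for wasted
    zoom/segment calls (assumed weights, not the paper's)."""
    r = 1.0 if correct else 0.0
    if used_aperture:
        r += 0.2 if aperture_helped else -0.1
    return r


def group_advantages(rewards):
    """GRPO-style baseline: normalize each rollout's reward by the mean and
    std of its own sampling group, with no learned value function."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0   # guard against a zero-std group
    return [(r - mean) / std for r in rewards]


# Four rollouts for one prompt: correct with a helpful aperture, correct
# without apertures, wrong with a wasted aperture, correct with a helpful one.
rollouts = [(True, True, True), (True, False, False),
            (False, True, False), (True, True, True)]
rewards = [composite_reward(*r) for r in rollouts]
advs = group_advantages(rewards)
```

Under this shaping, rollouts that answer correctly via a useful aperture receive the highest relative advantage, while the incorrect rollout with a gratuitous aperture call is pushed below the group mean.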