TikArt: Aperture-Guided Observation for Fine-Grained Visual Reasoning via Reinforcement Learning

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that multimodal large language models often lose fine-grained visual details due to global image encoding, hindering their ability to perceive small objects or subtle markers. To overcome this limitation, we propose TikArt—an aperture-guided agent that frames reasoning as a sequential decision-making process over regions of interest through a “Think-Aperture-Observe” loop. TikArt alternates between language generation and learnable zooming/SAM2 segmentation operations, converting localized visual cues into persistent linguistic memory. Built upon the Qwen3-VL-8B architecture and trained with the AGRPO reinforcement learning algorithm, TikArt is the first to integrate learnable aperture operations into multi-step reasoning. Using a two-stage curriculum strategy, it achieves state-of-the-art performance across multiple benchmarks—including V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg—while producing interpretable action trajectories.

📝 Abstract
We address fine-grained visual reasoning in multimodal large language models (MLLMs), where key evidence may reside in tiny objects, cluttered regions, or subtle markings that are lost under a single global image encoding. We introduce TikArt (Thinking Aperture), an aperture-guided agent that casts multi-step vision-language reasoning as a decision process over regions of interest. TikArt follows a Think-Aperture-Observe loop, alternating between language generation and two aperture actions: Zoom extracts rectangular crops, while Segment invokes SAM2 to obtain mask-based crops for irregular targets. After every action, the model must produce an explicit observation, turning local visual cues into persistent linguistic memory. Built on Qwen3-VL-8B, TikArt optimizes its reasoning policy with AGRPO, a GRPO-style reinforcement learning algorithm with a two-stage curriculum: it warms up segmentation actions and then jointly optimizes visual math, fine-grained VQA, and segmentation, using rewards that couple task success with purposeful aperture use. Experiments on V*, HR-Bench-4K/8K, MME-RealWorld-Lite, MMStar, RefCOCO, and ReasonSeg show consistent gains over the backbone and yield interpretable aperture trajectories for high-resolution reasoning.
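The Think-Aperture-Observe loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not TikArt's actual interface: every name here (Action, policy.think, policy.aperture, policy.observe) is a hypothetical stand-in, and the real system generates actions as tokens within the language model rather than through a Python API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str               # "zoom", "segment", or "answer"
    payload: object = None  # crop box, segmentation prompt, or answer text

def think_aperture_observe(image, question, policy, max_steps=6):
    """Alternate language generation with aperture actions, turning each
    localized visual cue into persistent linguistic memory in `context`."""
    context = [f"Question: {question}"]
    for _ in range(max_steps):
        thought, action = policy.think(image, context)   # language step
        context.append(f"Think: {thought}")
        if action.kind == "answer":                      # terminate
            return action.payload, context
        # "zoom" yields a rectangular crop; "segment" a SAM2-style mask crop
        crop = policy.aperture(image, action)
        # the model must emit an explicit observation after every action
        context.append(f"Observe: {policy.observe(crop)}")
    return policy.answer(image, context), context        # budget exhausted
```

The key design point reflected here is that observations are appended to the textual context, so evidence gathered from a crop persists even after the crop itself is discarded.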
Problem

Research questions and friction points this paper is trying to address.

fine-grained visual reasoning
multimodal large language models
tiny objects
subtle markings
cluttered regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

aperture-guided reasoning
fine-grained visual reasoning
reinforcement learning
multimodal LLMs
segmentation-aware observation
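AGRPO is described as a GRPO-style algorithm whose rewards couple task success with purposeful aperture use. A minimal sketch of the group-relative advantage computation under that reading, with illustrative names and an assumed additive reward shaping (the paper's actual reward design and weighting are not specified here):

```python
def agrpo_advantages(task_rewards, aperture_bonuses, eps=1e-8):
    """Group-relative advantages over a group of sampled trajectories.

    task_rewards: per-trajectory task-success rewards (e.g. answer correctness).
    aperture_bonuses: per-trajectory shaping terms rewarding purposeful
    aperture use (illustrative; the real coupling may differ).
    """
    rewards = [t + a for t, a in zip(task_rewards, aperture_bonuses)]
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # normalize within the group, as in GRPO
    return [(r - mean) / (std + eps) for r in rewards]
```

Trajectories that both answer correctly and use apertures purposefully score above the group mean and are reinforced; the two-stage curriculum would then control which tasks (segmentation warm-up, then mixed visual math, fine-grained VQA, and segmentation) populate each group.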
🔎 Similar Papers
Hao Ding, Zhejiang University
Zhichuan Yang, Xi'an Jiaotong University
Weijie Ge, Zhejiang University of Science and Technology
Ziqin Gao, Zhejiang University
Chaoyi Lu, Zhongguancun Laboratory (network security, internet measurement)
Lei Zhao, Zhejiang University (multimodal large models, video and 3D content generation (AIGC), virtual digital humans)