Agentic Jigsaw Interaction Learning for Enhancing Visual Perception and Reasoning in Vision-Language Models

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large vision-language models (VLMs) perform near-randomly on fundamental perceptual reasoning tasks—such as jigsaw puzzle solving—revealing profound limitations in their multimodal understanding. To address this, we propose AGILE, the first framework that formulates jigsaw solving as a code-driven, interactive reinforcement learning process: an agent generates executable action code, receives fine-grained visual feedback, and dynamically updates its internal environment state, thereby closing the perception–reasoning–action loop. This paradigm mitigates key bottlenecks in multimodal RL—namely, data scarcity and poor scalability. Experiments demonstrate that AGILE boosts accuracy on 2×2 jigsaw puzzles from 9.5% to 82.8%, while achieving an average +3.1% improvement across nine general vision benchmarks, significantly enhancing out-of-distribution generalization.

📝 Abstract
Although current large Vision-Language Models (VLMs) have advanced in multimodal understanding and reasoning, their fundamental perceptual and reasoning abilities remain limited. Specifically, even on simple jigsaw tasks, existing VLMs perform near randomly, revealing deficiencies in core perception and reasoning capabilities. While high-quality vision-language data can enhance these capabilities, its scarcity and limited scalability impose significant constraints. To address this, we propose AGILE, an Agentic jiGsaw Interaction Learning framework for Enhancing visual perception and reasoning in VLMs. AGILE formulates jigsaw solving as an interactive process, enabling the model to progressively engage with the environment. At each step, the model generates executable code to perform an action based on the current state, while the environment provides fine-grained visual feedback to guide task completion. Through this iterative cycle of observation and interaction, the model incrementally improves its perceptual and reasoning capabilities via exploration and feedback. Experimental results show that AGILE not only substantially boosts performance on jigsaw tasks of varying complexity (e.g., increasing accuracy from 9.5% to 82.8% under the 2×2 setting) but also demonstrates strong generalization across 9 general vision tasks, achieving an average improvement of 3.1%. These results indicate notable enhancements in both perceptual and reasoning abilities. This work opens a new avenue for advancing reasoning and generalization in multimodal models and provides an efficient, scalable solution to the scarcity of multimodal reinforcement learning data. The code and datasets are available at https://github.com/yuzeng0-0/AGILE .
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual perception and reasoning in Vision-Language Models
Addressing limited capabilities in multimodal understanding and reasoning
Solving jigsaw tasks through interactive learning and feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

AGILE uses interactive jigsaw solving for model learning
Model generates executable code for environment actions
Iterative feedback cycle enhances perception and reasoning
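The interaction cycle sketched in these bullets — observe the state, emit executable action code, receive feedback — can be illustrated with a toy example. This is a minimal sketch under stated assumptions: the `JigsawEnv`, `swap`-based action space, and `greedy_agent` policy are illustrative stand-ins, not AGILE's actual implementation, which uses a VLM policy and returns rendered image feedback rather than a scalar count.

```python
import random


class JigsawEnv:
    """Toy jigsaw environment: the state is a permutation of piece indices.

    Stands in for AGILE's interactive environment; real feedback would be a
    rendered image of the current arrangement, here just the number of
    correctly placed pieces.
    """

    def __init__(self, n_pieces=4, seed=0):
        rng = random.Random(seed)
        self.state = list(range(n_pieces))
        while self.state == sorted(self.state):
            rng.shuffle(self.state)  # ensure a scrambled start

    def step(self, action_code):
        # The agent's action is a snippet of executable code that may call
        # swap(i, j) to exchange two pieces, mirroring AGILE's code-as-action.
        def swap(i, j):
            self.state[i], self.state[j] = self.state[j], self.state[i]

        exec(action_code, {"swap": swap})
        correct = sum(p == i for i, p in enumerate(self.state))
        return self.state, correct, correct == len(self.state)


def greedy_agent(state):
    # Placeholder for the VLM policy: emit code that moves one misplaced
    # piece to its home slot.
    for i, p in enumerate(state):
        if p != i:
            return f"swap({i}, {state.index(i)})"
    return ""


env = JigsawEnv()
done = False
for _ in range(10):
    action = greedy_agent(env.state)
    state, reward, done = env.step(action)
    if done:
        break
print(state, done)  # solved permutation [0, 1, 2, 3] and True
```

The loop structure (state → action code → execution → feedback) is the part that corresponds to the paper's perception–reasoning–action cycle; everything else is scaffolding for the sketch.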
Yu Zeng
University of Science and Technology of China
Wenxuan Huang
CUHK & ECNU
Artificial General Intelligence, MLLM, LLM, AIGC, Model Acceleration
Shiting Huang
University of Glasgow
Biomechanics, Computational modeling, lung resection, right ventricle
Xikun Bao
University of Science and Technology of China
Yukun Qi
University of Science and Technology of China
Yiming Zhao
University of Science and Technology of China
Qiuchen Wang
University of Science and Technology of China
Computer Vision, Large Language Model
Lin Chen
University of Science and Technology of China
Zehui Chen
USTC
Huaian Chen
University of Science and Technology of China
Wanli Ouyang
Shanghai AI Laboratory, The Chinese University of Hong Kong
Feng Zhao
University of Science and Technology of China