Learning to Focus and Precise Cropping: A Reinforcement Learning Framework with Information Gaps and Grounding Loss for MLLMs

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of multimodal large language models to over-rely on global image context in complex visual scenes, which hinders their ability to perceive fine-grained details within cropped regions. To mitigate this limitation, the authors propose a two-stage reinforcement learning framework that operates without trajectory supervision. In the first stage, an "information gap" mechanism adjusts the granularity of the global image representation to guide the model toward task-relevant regions. In the second stage, a grounding loss, which leverages only a small number of bounding box annotations, is introduced to refine cropping precision. By integrating the information gap mechanism and grounding loss into a two-stage reinforcement learning paradigm, this approach significantly enhances the model's perception and reasoning over local details, achieving state-of-the-art performance on high-resolution visual question answering benchmarks.
📝 Abstract
To enhance the perception and reasoning capabilities of multimodal large language models in complex visual scenes, recent research has introduced agent-based workflows. In these works, MLLMs autonomously utilize image cropping tools to analyze regions of interest for question answering. While existing training strategies, such as those employing supervised fine-tuning and reinforcement learning, have made significant progress, our empirical analysis reveals a key limitation. We demonstrate the model's strong reliance on global input and its weak dependence on the details within the cropped region. To address this issue, we propose a novel two-stage reinforcement learning framework that does not require trajectory supervision. In the first stage, we introduce the "Information Gap" mechanism by adjusting the granularity of the global image. This mechanism trains the model to answer questions by focusing on cropped key regions, driven by the information gain these regions provide. The second stage further enhances cropping precision by incorporating a grounding loss, using a small number of bounding box annotations. Experiments show that our method significantly enhances the model's attention to cropped regions, enabling it to achieve state-of-the-art performance on high-resolution visual question-answering benchmarks. Our method provides a more efficient approach for perceiving and reasoning about fine-grained details in MLLMs. Code is available at: https://github.com/XuanPu-Z/LFPC.
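The abstract describes two reward signals: a stage-one "information gap" reward that credits answers only when the cropped region (rather than a coarsened global view) supplies the decisive evidence, and a stage-two grounding term computed against a small set of bounding box annotations. The paper's exact reward formulas are not given here, so the sketch below is purely illustrative: the function names, the 0.5 penalty weight, the `lam` coefficient, and the use of IoU as the grounding signal are all assumptions, not the authors' implementation.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def stage1_reward(correct_with_crop, correct_global_only):
    # Hypothetical information-gap reward: credit an answer the model gets
    # right with the crop, and discount answers the degraded global view
    # alone already supports, so the crop must carry the decisive detail.
    return float(correct_with_crop) - 0.5 * float(correct_global_only)

def stage2_reward(answer_correct, pred_box, gt_box, lam=0.5):
    # Hypothetical stage-two reward: answer correctness plus a grounding
    # term (IoU of the predicted crop against an annotated box) to sharpen
    # cropping precision.
    return float(answer_correct) + lam * iou(pred_box, gt_box)
```

Under this sketch, an answer that is only reachable through the crop earns the full stage-one reward, while a crop that exactly matches the annotated box maximizes the stage-two grounding bonus.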
Problem

Research questions and friction points this paper is trying to address.

multimodal large language models
image cropping
visual question answering
information gap
grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Information Gap
Grounding Loss
Reinforcement Learning
Precise Cropping
Multimodal Large Language Models
Xuanpu Zhao
School of Cyber Science and Technology, University of Science and Technology of China; Anhui Province Key Laboratory of Digital Security
Zhentao Tan
Independent Researcher
Dianmo Sheng
School of Cyber Science and Technology, University of Science and Technology of China; Anhui Province Key Laboratory of Digital Security
Tianxiang Chen
University of Science and Technology of China
MLLM, LLM, Medical AI, Segmentation
Yao Liu
Unknown affiliation
Chemistry, nanotechnology, photochemistry, solar cell, photoelectrochemistry
Yue Wu
Independent Researcher
Tao Gong
University of Science and Technology of China
Computer Vision, Machine Learning
Qi Chu
University of Science and Technology of China
Computer vision, Artificial intelligence security
Nenghai Yu
University of Science and Technology of China
Computer Vision, Artificial Intelligence, Information Hiding