🤖 AI Summary
This work addresses the tendency of multimodal large language models (MLLMs) to over-rely on global image context in complex visual scenes, which hinders their perception of fine-grained details within cropped regions. To mitigate this limitation, the authors propose a two-stage reinforcement learning framework that requires no trajectory supervision. In the first stage, an "Information Gap" mechanism adjusts the granularity of the global image representation, guiding the model toward task-relevant regions. In the second stage, a grounding loss, which requires only a small number of bounding-box annotations, is introduced to refine cropping precision. By integrating these two components into a single reinforcement learning paradigm, the approach significantly strengthens the model's perception of and reasoning about local details, achieving state-of-the-art performance on high-resolution visual question-answering benchmarks.
📝 Abstract
To enhance the perception and reasoning capabilities of multimodal large language models (MLLMs) in complex visual scenes, recent research has introduced agent-based workflows in which MLLMs autonomously use an image-cropping tool to analyze regions of interest for question answering. While existing training strategies, such as supervised fine-tuning and reinforcement learning, have made significant progress, our empirical analysis reveals a key limitation: the model relies strongly on the global input and only weakly on the details within the cropped region. To address this issue, we propose a novel two-stage reinforcement learning framework that requires no trajectory supervision. In the first stage, we introduce an "Information Gap" mechanism that adjusts the granularity of the global image. This mechanism trains the model to answer questions by focusing on cropped key regions, driven by the information gain those regions provide. The second stage further improves cropping precision by incorporating a grounding loss that uses only a small number of bounding-box annotations. Experiments show that our method significantly increases the model's attention to cropped regions, achieving state-of-the-art performance on high-resolution visual question-answering benchmarks. Our method offers a more efficient approach to perceiving and reasoning about fine-grained details in MLLMs. Code is available at: https://github.com/XuanPu-Z/LFPC.
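The stage-one idea, rewarding answers whose correctness depends on information gained from the cropped region rather than the (coarsened) global image, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Trajectory` structure, the comparison against a coarse-only answer, and the 0.5 bonus weight are all assumptions.

```python
# Hypothetical sketch of a stage-one "information gap" reward.
# Assumption: the global image is downsampled, so the model can only
# reliably answer by exploiting the full-resolution cropped region.
from dataclasses import dataclass


@dataclass
class Trajectory:
    answer: str          # final answer after (optionally) cropping
    used_crop: bool      # whether the cropping tool was invoked
    answer_no_crop: str  # answer produced from the coarse global image alone


def stage_one_reward(traj: Trajectory, gold: str) -> float:
    """Reward correct answers, with a bonus when the crop supplies
    information the coarse global view lacked (illustrative weighting)."""
    correct = float(traj.answer == gold)
    # "Information gain": cropping turned a wrong coarse-only answer
    # into a correct one.
    info_gain = float(
        traj.used_crop
        and traj.answer_no_crop != gold
        and traj.answer == gold
    )
    return correct + 0.5 * info_gain  # bonus weight 0.5 is an assumption
```

In an RL loop, this scalar would replace a plain correctness reward, so that policy gradients favor trajectories in which the crop genuinely contributes to the answer rather than being ignored.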