🤖 AI Summary
This work addresses the limitations of existing methods for multimodal object-entity relation extraction—namely, their lack of explicit reasoning, poor scalability, and opaque inference processes—by proposing a step-by-step reasoning framework grounded in large vision-language models. The approach integrates supervised fine-tuning with Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm, through a two-stage training paradigm: first, a cold-start phase learns fine-grained reasoning paths, followed by a refinement phase that enhances reasoning on complex samples. The method combines explicit stepwise reasoning with reinforcement learning, introducing an automated pipeline for constructing high-quality reasoning data and a progressive sample-mixing strategy that improves training stability and performance. Evaluated on the MORE benchmark, the proposed framework substantially outperforms prior approaches, achieving state-of-the-art results.
📝 Abstract
Multimodal Object-Entity Relation Extraction (MORE) is a challenging task in information extraction research. It aims to identify relations between visual objects and textual entities, requiring complex multimodal understanding and cross-modal reasoning abilities. Existing methods, mainly classification-based or generation-based without reasoning, struggle to handle complex extraction scenarios in the MORE task and suffer from limited scalability and opaque intermediate reasoning. To address these challenges, we propose MORE-R1, a novel model that introduces explicit stepwise reasoning with Reinforcement Learning (RL) to enable a Large Vision-Language Model (LVLM) to address the MORE task effectively. MORE-R1 integrates a two-stage training process, including an initial cold-start training stage with Supervised Fine-Tuning (SFT) and a subsequent RL stage for reasoning ability optimization. In the initial stage, we design an efficient way to automatically construct a high-quality SFT dataset containing fine-grained stepwise reasoning tailored to the MORE task, enabling the model to learn an effective reasoning paradigm. In the subsequent stage, we employ the Group Relative Policy Optimization (GRPO) RL algorithm with a Progressive Sample-Mixing Strategy to stabilize training and further enhance the model's reasoning ability on hard samples. Comprehensive experiments on the MORE benchmark demonstrate that MORE-R1 achieves state-of-the-art performance with significant improvements over baselines.
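The core idea behind GRPO, as opposed to PPO-style methods, is that it needs no learned value critic: several responses are sampled per prompt, and each response's advantage is its reward normalized against the group's mean and standard deviation. A minimal illustrative sketch of this group-relative advantage computation (function and variable names are my own, not from the paper):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantage: A_i = (r_i - mean(r)) / std(r),
    computed within the group of responses sampled for one prompt.
    No value network is required."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Toy example: rewards for G = 4 sampled reasoning traces of one prompt
rewards = [1.0, 0.0, 0.5, 0.5]
advs = group_relative_advantages(rewards)
print([round(a, 3) for a in advs])  # → [1.414, -1.414, 0.0, 0.0]
```

Responses scoring above the group mean get positive advantages and are reinforced; those below are suppressed. In practice these advantages would feed a clipped policy-gradient objective with a KL penalty against a reference model.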