MORE-R1: Guiding LVLM for Multimodal Object-Entity Relation Extraction via Stepwise Reasoning with Reinforcement Learning

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing methods for multimodal object-entity relation extraction—namely, their lack of explicit reasoning, poor scalability, and opaque inference processes—by proposing a step-by-step reasoning framework grounded in large vision-language models. The approach integrates supervised fine-tuning with Group Relative Policy Optimization (GRPO), a reinforcement learning algorithm, through a two-stage training paradigm: first, a cold-start phase learns fine-grained reasoning paths, followed by a refinement phase that enhances reasoning on complex samples. Innovatively combining explicit stepwise reasoning with reinforcement learning, the method introduces an automated pipeline for constructing high-quality reasoning data and employs a progressive sample mixing strategy to improve training stability and performance. Evaluated on the MORE benchmark, the proposed framework substantially outperforms prior approaches, achieving state-of-the-art results.

📝 Abstract
Multimodal Object-Entity Relation Extraction (MORE) is a challenging task in information extraction research. It aims to identify relations between visual objects and textual entities, requiring complex multimodal understanding and cross-modal reasoning abilities. Existing methods, mainly classification-based or generation-based without reasoning, struggle to handle complex extraction scenarios in the MORE task and suffer from limited scalability and opaque intermediate reasoning. To address these challenges, we propose MORE-R1, a novel model that introduces explicit stepwise reasoning with Reinforcement Learning (RL) to enable Large Vision-Language Models (LVLMs) to address the MORE task effectively. MORE-R1 integrates a two-stage training process: an initial cold-start training stage with Supervised Fine-Tuning (SFT), followed by an RL stage for reasoning-ability optimization. In the initial stage, we design an efficient way to automatically construct a high-quality SFT dataset containing fine-grained stepwise reasoning tailored to the MORE task, enabling the model to learn an effective reasoning paradigm. In the subsequent stage, we employ the Group Relative Policy Optimization (GRPO) RL algorithm with a Progressive Sample-Mixing Strategy to stabilize training and further enhance the model's reasoning ability on hard samples. Comprehensive experiments on the MORE benchmark demonstrate that MORE-R1 achieves state-of-the-art performance with significant improvement over baselines.
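The GRPO stage described above scores a group of sampled reasoning rollouts per instance and normalizes each rollout's reward against the group, avoiding a learned value critic. A minimal sketch of that group-relative advantage computation (function name, group size, and rewards are illustrative, not from the paper):

```python
# Minimal sketch of GRPO's group-relative advantage step.
# Assumes each of G sampled responses for one prompt has a scalar reward;
# the group-normalized reward serves as the advantage (no value critic).
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each reward by its group's mean and (population) std."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled stepwise-reasoning rollouts for one MORE instance,
# rewarded 1.0 for a correct extracted relation and 0.0 otherwise.
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Correct rollouts receive positive advantages and incorrect ones negative, so the policy update pushes probability mass toward the better reasoning paths within each group.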
Problem

Research questions and friction points this paper is trying to address.

Multimodal Object-Entity Relation Extraction
Cross-modal Reasoning
Stepwise Reasoning
Large Vision-Language Model
Information Extraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stepwise Reasoning
Reinforcement Learning
Large Vision-Language Model
Multimodal Relation Extraction
Group Relative Policy Optimization
Xiang Yuan
School of Software and Microelectronics, Peking University, Beijing, China
Xu Chu
Peking University
Machine learning, Data mining
Xinrong Chen
School of Software and Microelectronics, Peking University, Beijing, China
Haochen Li
Tsinghua University
cell-cell communication, single-cell genomics, spatial transcriptomics
Zonghong Dai
AlignBase, Beijing, China
Hongcheng Fan
AlignBase, Beijing, China
Xiaoyue Yuan
Information Application Research Center, Shanghai Municipal Administration for Market Regulation, Shanghai, China
Weiping Li
School of Software and Microelectronics, Peking University, Beijing, China
Tong Mo
AI Research Engineer at Huawei Canada
Reinforcement Learning, Keyword Spotting