STAR-R1: Spatial TrAnsformation Reasoning by Reinforcing Multimodal LLMs

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor reasoning coherence and low exploration efficiency of multimodal large language models (MLLMs) on the Transformation-Driven Visual Reasoning (TVR) task—identifying object transformations across images under varying viewpoints—this paper proposes a single-stage reinforcement learning (RL) framework for this setting. The method introduces a fine-grained reward function that explicitly credits partial correctness, penalizes redundant enumeration, and suppresses passive non-responses, eliciting human-like spatial comparison behavior. The framework jointly integrates MLLMs, the TVR task formalization, and sparse-reward RL, removing the need for multi-stage training or costly human-annotated demonstration trajectories. Evaluated on 11 metrics, the approach achieves state-of-the-art (SOTA) performance across the board: cross-view accuracy improves by 23% over supervised fine-tuning, with significant gains in spatial relation identification and multi-object collaborative comparison.

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities across diverse tasks, yet they lag significantly behind humans in spatial reasoning. We investigate this gap through Transformation-Driven Visual Reasoning (TVR), a challenging task requiring identification of object transformations across images under varying viewpoints. While traditional Supervised Fine-Tuning (SFT) fails to generate coherent reasoning paths in cross-view settings, sparse-reward Reinforcement Learning (RL) suffers from inefficient exploration and slow convergence. To address these limitations, we propose STAR-R1, a novel framework that integrates a single-stage RL paradigm with a fine-grained reward mechanism tailored for TVR. Specifically, STAR-R1 rewards partial correctness while penalizing excessive enumeration and passive inaction, enabling efficient exploration and precise reasoning. Comprehensive evaluations demonstrate that STAR-R1 achieves state-of-the-art performance across all 11 metrics, outperforming SFT by 23% in cross-view scenarios. Further analysis reveals STAR-R1's anthropomorphic behavior and highlights its unique ability to compare all objects for improving spatial reasoning. Our work provides critical insights in advancing the research of MLLMs and reasoning models. The codes, model weights, and data will be publicly available at https://github.com/zongzhao23/STAR-R1.
Problem

Research questions and friction points this paper is trying to address.

Addressing MLLMs' spatial reasoning gap vs humans
Improving transformation identification in cross-view images
Overcoming inefficient RL exploration in visual reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates single-stage RL with fine-grained rewards
Rewards partial correctness, penalizes excessive enumeration
Achieves state-of-the-art performance in spatial reasoning
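The fine-grained reward the bullets above describe can be illustrated with a minimal sketch. All names, the signature, and the penalty weight below are hypothetical; the paper's actual reward terms may be weighted and shaped differently:

```python
def transformation_reward(predicted, gold, penalty=0.1):
    """Toy fine-grained reward in the spirit of STAR-R1 (hypothetical).

    predicted: list of transformation labels emitted by the model
    gold:      list of ground-truth transformation labels
    """
    # Partial correctness: credit the fraction of gold transformations recovered,
    # rather than an all-or-nothing exact-match reward.
    matched = len(set(predicted) & set(gold))
    recall = matched / len(gold) if gold else 0.0

    # Excessive enumeration: each prediction beyond the gold count is penalized,
    # discouraging the model from listing every possible transformation.
    extra = max(0, len(predicted) - len(gold))
    reward = recall - penalty * extra

    # Passive inaction: an empty answer is penalized so the model cannot
    # avoid risk by refusing to commit to any transformation.
    if not predicted:
        reward -= penalty
    return reward
```

A dense signal like this rewards partially correct answers, which is what enables efficient exploration compared with a sparse exact-match reward.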
Zongzhao Li
Gaoling School of Artificial Intelligence, Renmin University of China
Zongyang Ma
MAIS, Institute of Automation, Chinese Academy of Sciences
Mingze Li
Gaoling School of Artificial Intelligence, Renmin University of China
Songyou Li
Gaoling School of Artificial Intelligence, Renmin University of China
AI for Science
Yu Rong
DAMO Academy, Alibaba Group, Hangzhou, China; Hupan Lab, Hangzhou, China
Tingyang Xu
Alibaba DAMO Academy
Machine Learning, Deep Graph Learning, Drug Discovery
Ziqi Zhang
MAIS, Institute of Automation, Chinese Academy of Sciences
Deli Zhao
Alibaba DAMO Academy
Generative Models, Multimodal Learning, Foundation Models
Wenbing Huang
Associate Professor, Renmin University of China
Machine Learning, AI for Science