VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforcement Learning

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-based RAG approaches suffer from two key bottlenecks: weak visual perception, because fixed pipelines fail to activate the underlying visual capabilities of foundation models, and poor query expressiveness, because model-issued queries fail to capture the fine-grained semantics of visually rich documents. To address these, the paper proposes VRAG-RL, a reinforcement learning (RL)-based, vision-perceptive RAG framework. It defines an action space tailored to visual inputs, with cropping and scaling actions guided by visual perception tokens, so the model can gather information from a coarse-to-fine perspective through interactive search and reasoning. A simple yet effective model-based reward jointly optimizes retrieval and reasoning by integrating query-rewriting quality with retrieval effectiveness. Trained with these specially designed RL strategies, vision-language models (VLMs) achieve significant improvements over strong baselines on visually rich document QA benchmarks. The code is publicly available, aligning the framework with real-world applications.
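As a loose illustration of the crop-and-scale actions described above, the sketch below shows one coarse-to-fine zoom step on a document page image using Pillow. This is a minimal sketch under assumptions, not the authors' implementation; the function name, bounding-box coordinates, file path, and resize target are all hypothetical.

```python
from PIL import Image

def crop_and_scale(page: Image.Image, bbox: tuple[int, int, int, int],
                   target_long_side: int = 1024) -> Image.Image:
    """Crop a region of a document page and upscale it so the model can
    re-inspect fine-grained content (a coarse-to-fine perception step)."""
    region = page.crop(bbox)  # (left, upper, right, lower) in pixels
    scale = target_long_side / max(region.size)
    new_size = (round(region.width * scale), round(region.height * scale))
    return region.resize(new_size, Image.LANCZOS)

# Hypothetical usage: zoom into a table in the lower half of a retrieved page.
page = Image.open("page_3.png")  # illustrative path
zoomed = crop_and_scale(page, bbox=(120, 900, 980, 1400))
```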

📝 Abstract
Effectively retrieving, reasoning and understanding visually rich information remains a challenge for RAG methods. Traditional text-based methods cannot handle visual-related information. On the other hand, current vision-based RAG approaches are often limited by fixed pipelines and frequently struggle to reason effectively due to the insufficient activation of the fundamental capabilities of models. As RL has been proven to be beneficial for model reasoning, we introduce VRAG-RL, a novel RL framework tailored for complex reasoning across visually rich information. With this framework, VLMs interact with search engines, autonomously sampling single-turn or multi-turn reasoning trajectories with the help of visual perception tokens and undergoing continual optimization based on these samples. Our approach highlights key limitations of RL in RAG domains: (i) Prior multi-modal RAG approaches tend to merely incorporate images into the context, leading to insufficient reasoning token allocation and neglecting visual-specific perception; and (ii) When models interact with search engines, their queries often fail to retrieve relevant information due to the inability to articulate requirements, thereby leading to suboptimal performance. To address these challenges, we define an action space tailored for visually rich inputs, with actions including cropping and scaling, allowing the model to gather information from a coarse-to-fine perspective. Furthermore, to bridge the gap between users' original inquiries and the retriever, we employ a simple yet effective reward that integrates query rewriting and retrieval performance with a model-based reward. Our VRAG-RL optimizes VLMs for RAG tasks using specially designed RL strategies, aligning the model with real-world applications. The code is available at https://github.com/Alibaba-NLP/VRAG.
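The multi-turn interaction the abstract describes can be pictured as a simple rollout loop: on each turn the VLM emits an action (search, crop, or answer) and the resulting observation is appended to its context. The sketch below is a hedged reconstruction under assumed interfaces (vlm.generate, search_engine.query, and the action schema are all hypothetical, not the released API); it reuses the crop_and_scale helper from the earlier sketch.

```python
def rollout(vlm, search_engine, question, max_turns=8):
    """Illustrative multi-turn trajectory: the VLM alternates between
    retrieving pages, zooming into regions, and answering."""
    context = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        step = vlm.generate(context)  # assumed: returns {"action": ..., ...}
        if step["action"] == "search":
            pages = search_engine.query(step["query"], top_k=3)  # assumed API
            context.append({"role": "tool", "content": pages})
        elif step["action"] == "crop":
            region = crop_and_scale(step["page"], step["bbox"])  # helper above
            context.append({"role": "tool", "content": region})
        else:  # "answer": the finished trajectory is what the RL updates consume
            return step["text"], context
    return None, context  # turn budget exhausted without an answer
```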
Problem

Research questions and friction points this paper is trying to address.

Enhancing RAG for visually rich information understanding
Overcoming fixed-pipeline limitations in vision-based RAG
Improving query articulation for better retrieval performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL framework for complex visual reasoning
Action space with cropping and scaling
Reward integrating query rewriting and retrieval performance (sketched below)
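One plausible reading of a reward that "integrates query rewriting and retrieval performance with a model-based reward" is a weighted combination like the sketch below. The NDCG-style retrieval term, the judge model, and the weights are illustrative assumptions, not the paper's exact formulation.

```python
import math

def retrieval_score(ranked_ids, relevant_ids):
    """NDCG-style credit for placing relevant pages near the top (illustrative)."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_ids) if d in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant_ids), len(ranked_ids))))
    return dcg / ideal if ideal > 0 else 0.0

def trajectory_reward(ranked_ids, relevant_ids, answer, reference, judge,
                      w_ret=0.5, w_ans=0.5):
    """Combine retrieval effectiveness with a model-based answer score.
    `judge` is an assumed scoring model returning a value in [0, 1]."""
    r_ret = retrieval_score(ranked_ids, relevant_ids)
    r_ans = judge(answer, reference)
    return w_ret * r_ret + w_ans * r_ans
```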
👥 Authors
Qiuchen Wang (University of Science and Technology of China)
Ruixue Ding (Tongyi Lab, Alibaba Group)
Yu Zeng (MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC)
Zehui Chen (USTC)
Lin Chen (MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC)
Shihang Wang (DAMO Academy, Alibaba Inc.)
Pengjun Xie (Alibaba Group)
Fei Huang (Tongyi Lab, Alibaba Group)
Feng Zhao (MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, USTC)