Truth in the Few: High-Value Data Selection for Efficient Multi-Modal Reasoning

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from high data redundancy and excessive computational cost during training. Method: The paper proposes a cognitive-value-driven data curation paradigm. It introduces Reasoning Activation Potential (RAP), a metric for identifying "cognitive samples," and designs a dual-path filtering mechanism comprising a Causal Discrepancy Estimator (CDE) and an Attention Confidence Estimator (ACE), augmented by a Difficulty-aware Replacement Module (DRM). The approach integrates causal inference via the potential-outcomes framework, contrastive analysis of multi-modal versus text-only outputs, token-level self-attention interpretability modeling, and dynamic difficulty enhancement. Contribution/Results: Using only 9.3% of the original training data, the method surpasses full-data training across six mainstream multimodal reasoning benchmarks, cuts computational overhead by over 43%, and markedly improves both training efficiency and generalization.
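As a concrete reading of the CDE idea, the sketch below scores a sample by how much the image shifts the model's confidence in the ground-truth answer; a sample the model answers equally well without the image is treated as prior-reliant and filtered out. This is a minimal PyTorch sketch under assumed interfaces: `mllm`, `answer_log_prob`, the blank-image ablation, and the threshold `tau` are all hypothetical, not the authors' implementation.

```python
# Minimal sketch of the Causal Discrepancy Estimator (CDE) intuition.
# All names here (mllm, answer_log_prob, blank_image, tau) are assumptions.
import torch

@torch.no_grad()
def answer_log_prob(mllm, image, question, answer):
    """Sum of log-probabilities the model assigns to the answer tokens.

    Hypothetical interface: mllm(image=..., text=...) returns per-position
    logits over the vocabulary, aligned with the tokenized answer (LongTensor).
    """
    logits = mllm(image=image, text=question)            # (T, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    answer_ids = mllm.tokenize(answer)                   # (T,) token ids
    return log_probs[torch.arange(len(answer_ids)), answer_ids].sum()

@torch.no_grad()
def causal_discrepancy(mllm, sample, blank_image):
    """CDE score: factual (with image) minus counterfactual (image ablated).

    A near-zero score suggests the answer is recoverable from text alone,
    i.e. the sample exercises language priors rather than visual reasoning.
    """
    factual = answer_log_prob(mllm, sample.image, sample.question, sample.answer)
    counterfactual = answer_log_prob(mllm, blank_image, sample.question, sample.answer)
    return (factual - counterfactual).item()

# Keep only samples whose score clears a threshold tau (assumed hyperparameter):
# cognitive = [s for s in dataset if causal_discrepancy(mllm, s, blank_image) > tau]
```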

📝 Abstract
While multi-modal large language models (MLLMs) have made significant progress on complex reasoning tasks via reinforcement learning, it is commonly believed that extensive training data is necessary for improving multi-modal reasoning ability, inevitably leading to data redundancy and substantial computational costs. But can smaller, high-value datasets match or outperform full corpora for multi-modal reasoning in MLLMs? In this work, we challenge this assumption through a key observation: meaningful multi-modal reasoning is triggered by only a sparse subset of training samples, termed cognitive samples, whereas the majority contribute only marginally. Building on this insight, we propose a novel data selection paradigm termed Reasoning Activation Potential (RAP), which identifies cognitive samples by estimating each sample's potential to stimulate genuine multi-modal reasoning via two complementary estimators: 1) a Causal Discrepancy Estimator (CDE), based on the potential-outcomes model, which eliminates samples that overly rely on language priors by comparing outputs between multi-modal and text-only inputs; and 2) an Attention Confidence Estimator (ACE), which exploits token-level self-attention to discard samples dominated by irrelevant but over-emphasized tokens during intermediate reasoning stages. Moreover, we introduce a Difficulty-aware Replacement Module (DRM) that substitutes trivial instances with cognitively challenging ones, thereby ensuring sufficient complexity for robust multi-modal reasoning. Experiments on six datasets show that our RAP method consistently achieves superior performance using only 9.3% of the training data while reducing computational costs by over 43%. Our code is available at https://github.com/Leo-ssl/RAP.
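The ACE path can be pictured as a diagnostic over intermediate self-attention maps. The sketch below, an illustration rather than the paper's code, computes two signals for one layer: how much attention the last reasoning token places on image tokens, and how concentrated the attention mass is overall; a sample whose attention piles onto a few non-image tokens would be discarded. The tensor layout, `image_token_mask`, and any cutoff values are assumptions.

```python
# Rough sketch of the Attention Confidence Estimator (ACE) intuition:
# flag samples where a few irrelevant tokens soak up most attention mass.
import torch

@torch.no_grad()
def attention_confidence(attn, image_token_mask, top_k=5):
    """attn: (heads, T, T) self-attention from one intermediate layer.
    image_token_mask: (T,) bool, True where a position is an image token.

    Returns two diagnostics: the share of attention the final token places
    on image tokens, and the mass captured by the top-k positions (high
    concentration on few non-image tokens signals spurious emphasis).
    """
    a = attn.mean(dim=0)                       # average over heads -> (T, T)
    row = a[-1]                                # attention of the last token
    image_share = row[image_token_mask].sum()  # grounding in visual input
    top_mass = row.topk(top_k).values.sum()    # concentration of attention
    return image_share.item(), top_mass.item()

# One possible filter rule (thresholds are assumptions, not from the paper):
# keep a sample when image_share is high, and discard it when top_mass is
# dominated by non-image positions, i.e. reasoning latches onto distractors.
```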
Problem

Research questions and friction points this paper is trying to address.

Identifying high-value data for efficient multi-modal reasoning
Reducing computational costs by selecting sparse cognitive samples
Enhancing reasoning ability with minimal training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAP scores each sample's potential to activate genuine multi-modal reasoning
Dual-path filtering: CDE removes prior-reliant samples, ACE removes attention-misled ones
DRM swaps trivial samples for cognitively challenging ones (see the selection sketch below)
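Putting the pieces together, one plausible selection loop is sketched below under assumed scorer callables: `cde_score`, `ace_score`, `difficulty`, and all thresholds are hypothetical, while the 9.3% budget mirrors the fraction reported in the paper.

```python
# Hypothetical end-to-end selection combining the three modules as described:
# CDE and ACE jointly gate Reasoning Activation Potential, then DRM swaps
# trivially easy picks for harder unselected ones.

def select_cognitive_samples(dataset, cde_score, ace_score, difficulty,
                             tau_cde=0.5, tau_ace=0.5, budget=0.093):
    """Return roughly `budget` fraction of `dataset` as cognitive samples."""
    # Dual-path filtering: a sample must pass both estimators.
    kept = [s for s in dataset
            if cde_score(s) > tau_cde and ace_score(s) > tau_ace]

    n = int(len(dataset) * budget)
    selected, pool = kept[:n], kept[n:]

    # Difficulty-aware Replacement (DRM): replace the easiest selected
    # samples with harder unselected ones so the subset stays demanding.
    selected.sort(key=difficulty)               # easiest first
    pool.sort(key=difficulty, reverse=True)     # hardest first
    for i in range(len(selected)):
        if pool and difficulty(pool[0]) > difficulty(selected[i]):
            selected[i] = pool.pop(0)
    return selected
```

In this sketch, the swap step only replaces a selected sample when a strictly harder candidate exists, so every sample in the final subset has already passed both the causal and the attention filter while the overall difficulty can only increase.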
👥 Authors
Shenshen Li
University of Electronic Science and Technology of China
Kaiyuan Deng
University of Electronic Science and Technology of China
Lei Wang
Salesforce AI Research
Hao Yang
Meituan
Chong Peng
Qingdao University (Machine Learning, Computer Vision)
Peng Yan
Research Assistant at ZHAW, PhD student at UZH (Deep Learning, Transfer Learning, Intelligent Algorithms)
Fumin Shen
University of Electronic Science and Technology of China
Heng Tao Shen
School of Computer Science and Technology, Tongji University
Xing Xu
University of Electronic Science and Technology of China; School of Computer Science and Technology, Tongji University