Tailored Teaching with Balanced Difficulty: Elevating Reasoning in Multimodal Chain-of-Thought via Prompt Curriculum

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal chain-of-thought (MCoT) prompting methods rely on random or manual example selection, ignoring both the model’s knowledge distribution and the intrinsic complexity of tasks—leading to unstable performance. To address this, we propose the first prompting curriculum framework for multimodal reasoning, inspired by the pedagogical principles of “teaching according to individual aptitude” and “difficulty balancing.” Our method constructs an ordered, model-capability-aligned sequence of demonstration examples. We introduce a novel dual-dimensional difficulty assessment: (i) model-perceived difficulty, quantified via prediction disagreement measured through active learning; and (ii) intrinsic difficulty, modeled from semantic and visual complexity of question–image pairs. Difficulty-balanced sampling then generates a progressive prompting curriculum. Evaluated across five mainstream multimodal benchmarks and multiple large multimodal models, our approach significantly improves reasoning accuracy and reduces performance variance, demonstrating strong robustness and generalizability.
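The paper itself does not include code, but its first difficulty signal, model-perceived difficulty quantified via prediction disagreement, can be sketched as vote entropy over repeated stochastic predictions for the same question-image pair. This is an illustrative reconstruction, not the authors' implementation; the function name `vote_disagreement` and the use of vote entropy as the disagreement measure are assumptions.

```python
import math
from collections import Counter

def vote_disagreement(predictions):
    """Vote entropy (in bits) over repeated sampled predictions for one
    question-image pair: 0.0 when the model always gives the same answer,
    higher when its answers disagree. Used here as an illustrative proxy
    for model-perceived difficulty; the paper's exact measure may differ."""
    counts = Counter(predictions)
    n = len(predictions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An item the model answers consistently reads as "easy" for that model...
easy = vote_disagreement(["A", "A", "A", "A"])  # 0.0 bits
# ...while split answers across samples signal a model-perceived hard item.
hard = vote_disagreement(["A", "B", "A", "C"])  # 1.5 bits
```

Because the score is computed from the model's own sampled outputs, it adapts to each MLLM's knowledge distribution, which is the "teaching according to individual aptitude" half of the framework.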

📝 Abstract
The effectiveness of Multimodal Chain-of-Thought (MCoT) prompting is often limited by the use of randomly or manually selected examples. These examples fail to account for both model-specific knowledge distributions and the intrinsic complexity of the tasks, resulting in suboptimal and unstable model performance. To address this, we propose a novel framework inspired by the pedagogical principle of "tailored teaching with balanced difficulty". We reframe prompt selection as a prompt curriculum design problem: constructing a well-ordered set of training examples that aligns with the model's current capabilities. Our approach integrates two complementary signals: (1) model-perceived difficulty, quantified through prediction disagreement in an active learning setup, capturing what the model itself finds challenging; and (2) intrinsic sample complexity, which measures the inherent difficulty of each question-image pair independently of any model. By jointly analyzing these signals, we develop a difficulty-balanced sampling strategy that ensures the selected prompt examples are diverse across both dimensions. Extensive experiments conducted on five challenging benchmarks and multiple popular Multimodal Large Language Models (MLLMs) demonstrate that our method yields substantial and consistent improvements and greatly reduces performance discrepancies caused by random sampling, providing a principled and robust approach for enhancing multimodal reasoning.
Problem

Research questions and friction points this paper is trying to address.

Optimizing multimodal reasoning by selecting tailored examples
Addressing suboptimal performance from random prompt selection
Balancing model-perceived and intrinsic difficulty in examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates model-perceived and intrinsic difficulty metrics
Develops difficulty-balanced sampling strategy for prompts
Constructs ordered prompt curriculum aligned with model capabilities
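The difficulty-balanced sampling idea above can be illustrated with a minimal sketch: combine the two difficulty scores per candidate example, then draw one representative from each of k difficulty strata so the resulting demonstrations span easy to hard in order. The function name `balanced_curriculum`, the simple sum as the joint score, and equal-size strata are illustrative assumptions, not the paper's exact procedure.

```python
def balanced_curriculum(items, k):
    """Select k demonstration examples spread across the combined difficulty
    range, ordered easy-to-hard (a prompting curriculum).
    `items`: list of (example_id, model_difficulty, intrinsic_difficulty),
    with both scores assumed normalized to [0, 1]."""
    # Rank candidates by a joint difficulty score (here: a plain sum).
    ranked = sorted(items, key=lambda t: t[1] + t[2])
    stride = len(ranked) / k
    # Take one representative from each of k equal-size difficulty strata,
    # so the prompt covers the spectrum rather than clustering at one level.
    return [ranked[int(i * stride)][0] for i in range(k)]

# Hypothetical candidate pool of scored question-image pairs.
pool = [("q1", 0.1, 0.2), ("q2", 0.9, 0.8), ("q3", 0.4, 0.5),
        ("q4", 0.7, 0.6), ("q5", 0.2, 0.1), ("q6", 0.5, 0.9)]
demos = balanced_curriculum(pool, 3)  # easy -> medium -> hard ids
```

Ordering the selected examples from easy to hard is what turns balanced sampling into a curriculum in the sense used by the paper.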
Xinglong Yang
MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
Quan Feng
Hunan Vanguard Group Corporation Company Limited
Zhongying Pan
Huaneng Information Technology Co., Ltd.
Xiang Chen
MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
Yu Tian
Tsinghua University
Wentong Li
Nanjing University of Aeronautics and Astronautics
Computer Vision · Machine Learning · Vision-Language Model · Robotics
Shuofei Qiao
Zhejiang University
AI Agent · Large Language Models · Natural Language Processing · Knowledge Graphs
Yuxia Geng
Zhejiang University, Hangzhou Dianzi University, PowerChina Huadong Engineering Corporation Limited
Knowledge Graph · Large Language Model · Industry Application
Xingyu Zhao
Associate Professor, University of Warwick
Software Reliability · Safe AI · Bayesian Inference · Probabilistic Model Checking · Safety Assurance
Sheng-Jun Huang
Nanjing University of Aeronautics & Astronautics
Machine Learning