🤖 AI Summary
This work addresses the severe confidence miscalibration problem in multimodal large language models (MLLMs), which often produce highly confident yet incorrect predictions. To tackle this, the authors propose Confidence-Driven Reinforcement Learning (CDRL) and Confidence-Aware Test-Time Scaling (CA-TTS), introducing confidence as a central signal for coordinating reasoning modules for the first time. Their approach integrates an Expert Model acting in multiple roles (e.g., Planner, Critic, Voter) under a unified scheduling and verification framework, leveraging original–noisy image pairs, a confidence-based reward function, and a Visual Self-Check mechanism. This design significantly enhances both perceptual sensitivity and calibration performance. Evaluated across four benchmarks, the method achieves an average improvement of 8.8% over prior state-of-the-art results, with ablation studies confirming the effectiveness and scalability of each component.
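The confidence-based reward described above is not specified in detail here; a minimal illustrative sketch, assuming a simple sign-weighted form and a hypothetical bonus term for the original–noisy image pairs (both function names and formulas are assumptions, not the paper's exact definitions), might look like:

```python
def confidence_reward(is_correct: bool, confidence: float) -> float:
    """Hypothetical confidence-based reward: +confidence for a correct answer,
    -confidence for a wrong one, so high confidence only pays off when the
    model is actually right (penalizing confident errors)."""
    return confidence if is_correct else -confidence


def pairwise_perception_bonus(conf_original: float, conf_noisy: float) -> float:
    """Hypothetical bonus for original-noisy image pairs: rewards the model
    for being less confident on the corrupted image than on the original,
    encouraging perceptual sensitivity to visual degradation."""
    return max(0.0, conf_original - conf_noisy)
```

Under this sketch, a confidently wrong answer receives a strongly negative reward, which is the calibration pressure the summary describes.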
📝 Abstract
Recent advances in Multi-modal Large Language Models (MLLMs) have predominantly focused on enhancing visual perception to improve accuracy. However, a critical question remains unexplored: Do models know when they do not know? Through a probing experiment, we reveal a severe confidence miscalibration problem in MLLMs. To address this, we propose Confidence-Driven Reinforcement Learning (CDRL), which uses original–noisy image pairs and a novel confidence-based reward to enhance perceptual sensitivity and robustly calibrate the model's confidence. Beyond training benefits, calibrated confidence enables more effective test-time scaling as a free lunch. We further propose Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals. An Expert Model acts in multiple roles (e.g., Planner, Critic, Voter) to schedule these modules and provide external verification. Our integrated framework establishes new state-of-the-art results with consistent 8.8% gains across four benchmarks. Further ablation studies demonstrate the effectiveness of each module and the superior scaling behavior of our framework.
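The confidence-guided coordination in CA-TTS can be pictured as a routing policy: cheap checks when the model is already confident, heavier modules as confidence drops. The sketch below is an assumption about how such scheduling might work; the threshold values and module order are illustrative, not taken from the paper.

```python
def ca_tts_route(confidence: float, hi: float = 0.9, lo: float = 0.5) -> str:
    """Illustrative confidence-aware routing (thresholds hi/lo are assumed).

    High confidence  -> accept the initial answer as-is.
    Mid confidence   -> Self-Consistency: sample several answers and vote.
    Low confidence   -> Visual Self-Check plus Self-Reflection: re-examine
                        the image and critique the reasoning before answering.
    """
    if confidence >= hi:
        return "accept"
    if confidence >= lo:
        return "self_consistency"
    return "visual_self_check+self_reflection"
```

Because well-calibrated confidence makes these branch decisions reliable, this is the sense in which calibration turns into "free" test-time-scaling gains.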