CAMD: Coverage-Aware Multimodal Decoding for Efficient Reasoning of Multimodal Large Language Models

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the mismatch between computational resources and task difficulty in multimodal large language models, which often leads to inefficient inference—wasting computation on simple samples while under-investing in challenging ones. The authors' empirical analysis reveals a heavy-tailed distribution of multimodal reasoning difficulty, motivating a coverage-aware decoding strategy: an adaptive inference mechanism that dynamically allocates computational resources based on sample uncertainty, jointly modeling sampling coverage and risk through evidence-weighted scoring, posterior coverage estimation, and sequential Bayesian updating. Within a constrained token budget, the method effectively balances efficiency and reliability. Experiments demonstrate that the approach significantly outperforms existing decoding strategies across multiple benchmarks, improving both inference accuracy and computational efficiency.
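The adaptive loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual algorithm: it assumes the posterior coverage estimate is the mean of a Beta distribution over "does a fresh sample agree with the current leading answer," and the names `sample_fn` and `weight_fn` are hypothetical stand-ins for one decoding pass and the evidence-weighted scoring function.

```python
from collections import Counter

def coverage_aware_decode(sample_fn, weight_fn, budget=16, target=0.9):
    """Draw decoding samples one at a time under a fixed budget.

    Keeps an evidence-weighted vote tally over candidate answers and a
    Beta posterior over the probability that a new sample agrees with
    the current leader. Stops early once the posterior mean coverage
    exceeds `target`, so easy instances use few samples and hard ones
    consume more of the budget.
    """
    tally = Counter()
    alpha, beta = 1.0, 1.0  # uniform Beta(1, 1) prior over agreement
    leader = None
    coverage = alpha / (alpha + beta)
    for _ in range(budget):
        ans = sample_fn()                 # one decoding pass
        tally[ans] += weight_fn(ans)      # evidence-weighted vote
        if leader is not None:
            if ans == leader:
                alpha += 1.0              # sample agreed with the leader
            else:
                beta += 1.0               # sample disagreed
        leader = tally.most_common(1)[0][0]
        coverage = alpha / (alpha + beta)  # posterior mean agreement
        if coverage >= target and alpha + beta >= 4:
            break                          # confident enough: stop early
    return leader, coverage
```

For a trivially easy instance where every sample returns the same answer, the loop terminates well before exhausting the budget; a noisy, hard instance keeps the posterior mean low and spends the full 16 samples.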

📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have shown impressive reasoning capabilities across vision-language tasks, yet still face the challenge of compute-difficulty mismatch. Through empirical analyses, we identify that existing decoding methods may waste compute on easy cases while underserving hard ones, affecting both model effectiveness and efficiency. To address this issue, we first develop a theoretical framework that links sampling coverage, instance difficulty, and residual risk. Our analysis reveals that multimodal reasoning exhibits a heavy-tailed difficulty distribution; a small subset of hard or ambiguous samples dominates the residual failure probability. Based on this insight, we propose Coverage-Aware Multimodal Decoding (CAMD), an adaptive inference mechanism that dynamically allocates computation according to estimated uncertainty. CAMD integrates evidence-weighted scoring, posterior coverage estimation, and sequential Bayesian updating to balance efficiency and reliability under a limited token budget. Experiments on various benchmark datasets and baselines demonstrate the effectiveness and advantages of our approach.
Problem

Research questions and friction points this paper is trying to address.

compute-difficulty mismatch
multimodal reasoning
decoding efficiency
instance difficulty
residual risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

Coverage-Aware Decoding
Multimodal Large Language Models
Adaptive Inference
Uncertainty Estimation
Bayesian Updating