Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) often disregard generated intermediate rationales during multimodal Chain-of-Thought (CoT) reasoning, leading to unfaithful inference and reduced accuracy. To address this, we propose RED (Rationale-Enhanced Decoding), a plug-in, training-free decoding method that reformulates multimodal CoT as a reward maximization problem under KL divergence constraints. RED jointly models the conditional distributions of images and rationales, guiding autoregressive decoding via rationale-conditional log-likelihood—without modifying pretrained LVLM architectures or requiring fine-tuning. Empirically, RED significantly improves both reasoning accuracy and consistency across benchmarks including ScienceQA and MMStar. It consistently outperforms standard CoT and existing decoding strategies across diverse state-of-the-art LVLMs. Moreover, RED enhances decision interpretability and reliability by explicitly grounding final predictions in intermediate multimodal rationales.
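The reward-maximization view described above admits a standard closed form; a schematic of what the summary describes, writing $v$ for the image, $x$ for the question, $z$ for the generated rationale, and $\beta$ for a hypothetical weighting coefficient:

```latex
% Sketch of the KL-constrained objective the summary describes:
% reward = rationale-conditional log-likelihood, reference = image-conditional LVLM.
\max_{q} \; \mathbb{E}_{y \sim q}\!\left[ \log p(y \mid x, z) \right]
  - \frac{1}{\beta}\, \mathrm{KL}\!\left( q \,\|\, p(\cdot \mid x, v) \right)

% Its well-known closed-form optimum multiplies the two conditionals:
q^{*}(y) \;\propto\; p(y \mid x, v)\, \cdot\, p(y \mid x, z)^{\beta}
```

This is consistent with the abstract's description of RED as multiplying distinct image-conditional and rationale-conditional next-token distributions; the exact notation and the role of $\beta$ here are assumptions, not taken from the paper.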

📝 Abstract
Large vision-language models (LVLMs) have demonstrated remarkable capabilities by integrating pre-trained vision encoders with large language models (LLMs). Similar to single-modal LLMs, chain-of-thought (CoT) prompting has been adapted for LVLMs to enhance multi-modal reasoning by generating intermediate rationales based on visual and textual inputs. While CoT is assumed to improve grounding and accuracy in LVLMs, our experiments reveal a key challenge: existing LVLMs often ignore the contents of generated rationales in CoT reasoning. To address this, we re-formulate multi-modal CoT reasoning as a KL-constrained reward maximization focused on rationale-conditional log-likelihood. As the optimal solution, we propose rationale-enhanced decoding (RED), a novel plug-and-play inference-time decoding strategy. RED harmonizes visual and rationale information by multiplying distinct image-conditional and rationale-conditional next token distributions. Extensive experiments show that RED consistently and significantly improves reasoning over standard CoT and other decoding methods across multiple benchmarks and LVLMs. Our work offers a practical and effective approach to improve both the faithfulness and accuracy of CoT reasoning in LVLMs, paving the way for more reliable rationale-grounded multi-modal systems.
Problem

Research questions and friction points this paper is trying to address.

LVLMs ignore generated rationales in CoT reasoning
Need to improve multi-modal CoT reasoning accuracy
Enhance faithfulness of rationale-grounded LVLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

KL-constrained reward maximization for CoT
Rationale-enhanced decoding (RED) strategy
Multiplies image and rationale token distributions
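At each decoding step, the combination the last bullet describes amounts to an element-wise product of two next-token distributions, renormalized. A minimal sketch, assuming softmax-normalized distributions as inputs; the function name and the `beta` weighting knob are hypothetical, not from the paper:

```python
import numpy as np

def red_next_token_dist(p_image_cond, p_rationale_cond, beta=1.0):
    """Combine an image-conditional and a rationale-conditional next-token
    distribution by element-wise product, done in log space for stability.
    `beta` (a hypothetical knob) weights the rationale-conditional term."""
    log_p = np.log(p_image_cond) + beta * np.log(p_rationale_cond)
    log_p -= log_p.max()        # shift for numerical stability
    p = np.exp(log_p)
    return p / p.sum()          # renormalize to a valid distribution
```

Greedy or sampled decoding would then proceed from the combined distribution at every step, requiring no change to the pretrained LVLM itself, which matches the plug-and-play, training-free framing above.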