Learning Modal-Mixed Chain-of-Thought Reasoning with Latent Embeddings

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a modal-mixed chain-of-thought (CoT) approach to address a limitation of traditional language-based CoT: its inability to effectively model critical visual intermediate states during multimodal reasoning. The method integrates visual latent embeddings into the CoT reasoning chain by using a vision-language model (VLM) encoder to preserve semantic alignment, embedding compact visual sketches within textual reasoning steps, and employing a diffusion decoder to disentangle high-level intent from fine-grained perception. Trained on VLM and language backbones through a two-stage optimization strategy—supervised fine-tuning followed by reinforcement learning—the approach significantly outperforms purely language-based and standard CoT methods across 11 diverse multimodal reasoning tasks, demonstrating the effectiveness and generalizability of modal-mixed reasoning.

📝 Abstract
We study how to extend chain-of-thought (CoT) beyond language to better handle multimodal reasoning. While CoT helps LLMs and VLMs articulate intermediate steps, its text-only form often fails on vision-intensive problems where key intermediate states are inherently visual. We introduce modal-mixed CoT, which interleaves textual tokens with compact visual sketches represented as latent embeddings. To bridge the modality gap without eroding the original knowledge and capability of the VLM, we use the VLM itself as an encoder and train the language backbone to reconstruct its own intermediate vision embeddings, guaranteeing semantic alignment of the visual latent space. We further attach a diffusion-based latent decoder, invoked by a special control token and conditioned on hidden states from the VLM. In this way, the diffusion head carries fine-grained perceptual details while the VLM specifies high-level intent, which cleanly disentangles roles and reduces the optimization pressure on the VLM. Training proceeds in two stages: supervised fine-tuning on traces that interleave text and latents with a joint next-token and latent-reconstruction objective, followed by reinforcement learning that teaches when to switch modalities and how to compose long reasoning chains. Extensive experiments across 11 diverse multimodal reasoning tasks demonstrate that our method outperforms language-only and other CoT methods. Our code will be publicly released.
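The decoding loop the abstract describes — text tokens interleaved with latent visual sketches, with a special control token handing off to a diffusion decoder — can be illustrated with a minimal toy sketch. All names here (`toy_language_step`, `toy_diffusion_decoder`, the `<latent>` control token, the trigger heuristic) are hypothetical stand-ins for illustration, not the paper's actual models or API:

```python
# Toy sketch of modal-mixed CoT decoding. A special control token in the
# language stream triggers a latent "visual sketch" step instead of text.

LATENT_TOKEN = "<latent>"  # hypothetical control token

def toy_language_step(context):
    # Stand-in for the VLM's next-token prediction: here it emits the
    # control token after every two text steps, purely for illustration.
    text_steps = sum(1 for t in context
                     if isinstance(t, str) and t != LATENT_TOKEN)
    if (text_steps and text_steps % 2 == 0
            and isinstance(context[-1], str) and context[-1] != LATENT_TOKEN):
        return LATENT_TOKEN
    return f"step{text_steps + 1}"

def toy_diffusion_decoder(hidden_state, dim=4):
    # Stand-in for the diffusion head: maps the VLM hidden state to a
    # compact visual latent embedding (here just a deterministic vector).
    return [round(hidden_state * 0.1, 2)] * dim

def modal_mixed_decode(max_steps=6):
    chain = []
    for step in range(max_steps):
        token = toy_language_step(chain)
        if token == LATENT_TOKEN:
            # Control token: append a latent embedding in place of text,
            # so the reasoning chain interleaves the two modalities.
            chain.append(LATENT_TOKEN)
            chain.append(toy_diffusion_decoder(hidden_state=step))
        else:
            chain.append(token)
    return chain

chain = modal_mixed_decode()
print(chain)
```

The point of the control-token design is that the language backbone only decides *when* a visual sketch is needed (high-level intent), while a separate decoder fills in the perceptual detail, so the two objectives do not compete during training.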
Problem

Research questions and friction points this paper is trying to address.

multimodal reasoning
chain-of-thought
visual reasoning
latent embeddings
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

modal-mixed chain-of-thought
latent embeddings
vision-language models
diffusion decoder
multimodal reasoning