EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current diffusion models, where multimodal large language models (MLLMs) employed as text encoders suffer from insufficient reasoning depth and static guidance signals, hindering their capacity for spatial and logical reasoning in complex tasks. To overcome this, the authors propose the Endogenous Chain-of-Thought (EndoCoT) framework, which introduces an iteratively evolving chain-of-thought mechanism into diffusion models for the first time. EndoCoT dynamically activates the MLLM’s reasoning capabilities through an iterative thought-guidance module and aligns these thoughts with the denoising process of the Diffusion Transformer (DiT) via a terminal thought-grounding module, enabling stepwise, semantically consistent solutions to complex problems. The method achieves an average accuracy of 92.1% across benchmarks including Maze, TSP, VSP, and Sudoku, outperforming the strongest baseline by 8.3 percentage points.

📝 Abstract
Recently, Multimodal Large Language Models (MLLMs) have been widely integrated into diffusion frameworks, primarily as text encoders, to tackle complex tasks such as spatial reasoning. However, this paradigm suffers from two critical limitations: (i) The MLLM text encoder exhibits insufficient reasoning depth. Single-step encoding fails to activate the Chain-of-Thought process that MLLMs need in order to provide accurate guidance for complex tasks. (ii) The guidance remains invariant during decoding. Static guidance prevents the DiT from progressively decomposing complex instructions into actionable denoising steps, even when the MLLM encodings are correct. To this end, we propose Endogenous Chain-of-Thought (EndoCoT), a novel framework. First, an iterative thought guidance module activates the MLLM's reasoning potential by iteratively refining latent thought states and bridges these states to the DiT's denoising process. Second, a terminal thought grounding module keeps the reasoning trajectory grounded in textual supervision by aligning the final state with ground-truth answers. With these two components, the MLLM text encoder delivers meticulously reasoned guidance, which the DiT executes progressively to solve complex tasks in a step-by-step manner. Across diverse benchmarks (e.g., Maze, TSP, VSP, and Sudoku), EndoCoT achieves an average accuracy of 92.1%, outperforming the strongest baseline by 8.3 percentage points.
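The two components described in the abstract can be sketched in code. Note this is a minimal illustrative sketch, not the paper's implementation: the names `ThoughtRefiner` and `grounding_loss` are hypothetical, the refinement step is a simple residual MLP stand-in for the iterative thought guidance module, and cosine alignment stands in for the terminal thought grounding objective, whose details are not given in the abstract.

```python
import torch
import torch.nn as nn


class ThoughtRefiner(nn.Module):
    """Hypothetical sketch of iterative thought guidance: refine a latent
    thought state over several steps, conditioned on the initial encoding."""

    def __init__(self, dim: int):
        super().__init__()
        self.step = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, h0: torch.Tensor, n_steps: int) -> list:
        states = [h0]
        for _ in range(n_steps):
            h = states[-1]
            # residual update from (initial encoding, current thought state)
            states.append(h + self.step(torch.cat([h0, h], dim=-1)))
        return states


def grounding_loss(final_state: torch.Tensor, answer_emb: torch.Tensor) -> torch.Tensor:
    # Terminal thought grounding (sketch): align the last thought state
    # with a ground-truth answer embedding via cosine similarity.
    return 1 - torch.cosine_similarity(final_state, answer_emb, dim=-1).mean()


dim, T = 64, 4
refiner = ThoughtRefiner(dim)
h0 = torch.randn(2, dim)        # stand-in for the MLLM's single-step encoding
states = refiner(h0, T)         # T+1 evolving states; each could condition a DiT denoising stage
loss = grounding_loss(states[-1], torch.randn(2, dim))
```

In this reading, the sequence of intermediate states (rather than a single static encoding) is what would be fed to the DiT, so the guidance signal evolves alongside denoising instead of remaining invariant.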
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
Diffusion Models
Multimodal Large Language Models
Spatial Reasoning
Iterative Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Endogenous Chain-of-Thought
Diffusion Models
Multimodal Large Language Models
Iterative Reasoning
Thought Guidance