🤖 AI Summary
This work addresses two key challenges in multimodal chain-of-thought (CoT) reasoning: the difficulty of modeling visual state transitions and the lack of coherence in reasoning trajectories. To this end, we propose a unified vision-language CoT reasoning framework. Methodologically, we introduce a two-level reasoning paradigm: a macro-level CoT for high-level task planning and a micro-level CoT for fine-grained vision-language subtask execution. We further incorporate image-text interleaved supervision and multi-task joint training to enable coherent cross-modal state modeling. Built upon a unified vision-language understanding-generation architecture, our approach employs a two-stage structured training strategy. Evaluated on multiple benchmarks, including WISE, RISE, and KRIS, our method achieves state-of-the-art performance. All experiments are conducted efficiently using only eight A100 GPUs. The framework significantly improves reasoning coherence, generalization capability, and training efficiency.
📝 Abstract
Chain-of-Thought (CoT) reasoning has been widely adopted to enhance Large Language Models (LLMs) by decomposing complex tasks into simpler, sequential subtasks. However, extending CoT to vision-language reasoning tasks remains challenging, as it often requires interpreting transitions of visual states to support reasoning. Existing methods often struggle with this, either because they have limited capacity to model visual state transitions or because their fragmented architectures produce incoherent visual trajectories.
To overcome these limitations, we propose Uni-CoT, a Unified Chain-of-Thought framework that enables coherent and grounded multimodal reasoning within a single unified model. The key idea is to leverage a model capable of both image understanding and generation to reason over visual content and model evolving visual states. However, empowering a unified model to do so is non-trivial, given the high computational cost and training burden. To address this, Uni-CoT introduces a novel two-level reasoning paradigm: a Macro-Level CoT for high-level task planning and a Micro-Level CoT for subtask execution. This design significantly reduces computational overhead. We further introduce a structured training paradigm that combines interleaved image-text supervision for the macro-level CoT with multi-task objectives for the micro-level CoT. Together, these innovations allow Uni-CoT to perform scalable and coherent multimodal reasoning. Moreover, thanks to this design, all experiments can be completed efficiently using only 8 A100 GPUs with 80 GB of VRAM each. Experimental results on a reasoning-driven image generation benchmark (WISE) and on editing benchmarks (RISE and KRIS) indicate that Uni-CoT achieves SOTA performance and strong generalization, establishing it as a promising solution for multimodal reasoning. Project Page and Code: https://sais-fuxi.github.io/projects/uni-cot/