🤖 AI Summary
Current text-to-image models exhibit significant limitations in multimodal input understanding and complex reasoning tasks, particularly mathematical reasoning. To address these challenges, the authors propose MindOmni, a unified multimodal large language model that jointly learns vision-language representations and enables fine-grained, stepwise reasoning generation. Methodologically, MindOmni integrates: (i) a unified vision-language model with a decoder-only diffusion generation module; (ii) chain-of-thought supervised fine-tuning; and (iii) the novel Reasoning Generation Policy Optimization (RGPO) algorithm, which incorporates multimodal feedback into reinforcement learning to enable end-to-end optimization of the reasoning process. Extensive evaluations demonstrate that MindOmni achieves state-of-the-art performance across both understanding and generation benchmarks, with particularly notable gains in mathematical reasoning instruction following and output quality. The implementation is publicly available.
📝 Abstract
Recent text-to-image systems face limitations in handling multimodal inputs and complex reasoning tasks. We introduce MindOmni, a unified multimodal large language model that addresses these challenges by incorporating reasoning generation through reinforcement learning. MindOmni leverages a three-phase training strategy: i) design of a unified vision-language model with a decoder-only diffusion module, ii) supervised fine-tuning with Chain-of-Thought (CoT) instruction data, and iii) our proposed Reasoning Generation Policy Optimization (RGPO) algorithm, which utilizes multimodal feedback to effectively guide policy updates. Experimental results demonstrate that MindOmni outperforms existing models, achieving impressive performance on both understanding and generation benchmarks, while showcasing advanced fine-grained reasoning generation capabilities, especially with mathematical reasoning instructions. All code will be made public at https://github.com/EasonXiao-888/MindOmni.
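The abstract names RGPO but does not spell out its update rule. As a rough intuition, a minimal sketch of a group-relative policy-optimization step with a multimodal reward is shown below; the reward weighting, the `multimodal_reward` helper, and the group-normalization scheme are illustrative assumptions, not the paper's actual algorithm.

```python
def multimodal_reward(image_score: float, text_score: float) -> float:
    # Hypothetical combined reward: the abstract only says RGPO uses
    # "multimodal feedback"; this equal weighting is an assumption.
    return 0.5 * image_score + 0.5 * text_score

def group_normalized_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: each candidate's reward minus the
    group mean, scaled by the group standard deviation (a common
    scheme in group-based RL methods such as GRPO)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5 or 1.0  # avoid division by zero for uniform groups
    return [(r - mean) / std for r in rewards]

# Score a group of candidate generations with the multimodal reward,
# then compute the advantages that would weight the policy-gradient update.
scores = [(0.9, 0.8), (0.4, 0.5), (0.7, 0.6)]  # (image, text) pairs, made up
rewards = [multimodal_reward(i, t) for i, t in scores]
advantages = group_normalized_advantages(rewards)
```

In this kind of scheme, above-average candidates in the group receive positive advantages (reinforcing their reasoning traces) and below-average ones negative, without needing a learned value function.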