🤖 AI Summary
Existing omnimodal models support unified multimodal perception and generation but lack adaptive reasoning depth: they tend to over-reason on simple tasks and under-reason on complex ones. This paper proposes the first cross-modal (text/audio/vision) framework for adaptive reasoning-depth control. The method comprises: (1) a multimodal, joint difficulty-aware benchmark; (2) a two-stage training paradigm of adaptive supervised fine-tuning followed by multimodal reward-guided reinforcement learning (Adaptive GRPO); and (3) reasoning-augmented training data with fine-grained behavioral optimization. Experiments demonstrate that the approach significantly outperforms state-of-the-art methods on multimodal reasoning benchmarks while maintaining high efficiency and accuracy. The code and datasets will be publicly released.
📝 Abstract
Recent advances in Omni models have enabled unified multimodal perception and generation. However, most existing systems still exhibit rigid reasoning behaviors, either overthinking simple problems or failing to reason when necessary. To address this limitation, we propose Omni-AutoThink, a novel adaptive reasoning framework that dynamically adjusts the model's reasoning depth according to task difficulty. Our framework comprises two stages: (1) an Adaptive Supervised Fine-Tuning (Adaptive SFT) stage, which endows the Omni model with fundamental reasoning capability using large-scale reasoning-augmented data, and (2) an Adaptive Reinforcement Learning (Adaptive GRPO) stage, which optimizes reasoning behaviors based on task complexity and reward feedback. We further construct a comprehensive adaptive reasoning benchmark that spans text-only, text-audio, text-visual, and text-audio-visual modalities, providing both training and evaluation splits for multimodal reasoning assessment. Experimental results demonstrate that our proposed framework significantly improves adaptive reasoning performance compared to previous baselines. All benchmark data and code will be publicly released.
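To make the second stage concrete, here is a minimal sketch of the two ingredients it combines. The group-relative advantage is the standard GRPO formulation (each sampled response is normalized against its own group's reward statistics, with no learned value function); the difficulty-aware reward is purely illustrative, since the paper's actual reward design is not given here: `adaptive_reward`, its `difficulty` score in [0, 1], and the penalty weight `lam` are assumptions, not the authors' definitions.

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Standard GRPO group-relative advantages: normalize each sampled
    response's reward by the mean and std of its sampling group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

def adaptive_reward(correct, num_reasoning_tokens, difficulty, lam=0.001):
    """Hypothetical difficulty-aware reward (illustrative only):
    correctness minus a length penalty that shrinks as task difficulty
    grows, so long chains of thought are discouraged on easy inputs
    but tolerated on hard ones."""
    penalty = lam * (1.0 - difficulty) * num_reasoning_tokens
    return (1.0 if correct else 0.0) - penalty
```

Under this toy shaping, a 100-token correct answer to an easy task (`difficulty=0.0`) scores 0.9, while the same answer to a hard task (`difficulty=1.0`) keeps the full 1.0, which is the kind of pressure that would push reasoning depth to track difficulty.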