Omni-AutoThink: Adaptive Multimodal Reasoning via Reinforcement Learning

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing omnimodal models support unified multimodal perception and generation but lack adaptive reasoning depth: they tend to over-reason on simple tasks and under-reason on complex ones. This paper proposes the first cross-modal (text/audio/vision) framework for adaptively controlling reasoning depth. The method comprises: (1) a difficulty-aware benchmark spanning joint multimodal combinations; (2) a two-stage training paradigm of adaptive supervised fine-tuning followed by multimodal reward-guided reinforcement learning (Adaptive GRPO); and (3) reasoning-augmented training data with fine-grained behavioral optimization. Experiments show that the approach significantly outperforms state-of-the-art methods on multimodal reasoning benchmarks while maintaining high efficiency and accuracy. The benchmark data and code will be publicly released.

📝 Abstract
Recent advances in Omni models have enabled unified multimodal perception and generation. However, most existing systems still exhibit rigid reasoning behaviors, either overthinking simple problems or failing to reason when necessary. To address this limitation, we propose Omni-AutoThink, a novel adaptive reasoning framework that dynamically adjusts the model's reasoning depth according to task difficulty. Our framework comprises two stages: (1) an Adaptive Supervised Fine-Tuning (Adaptive SFT) stage, which endows the Omni model with fundamental reasoning capability using large-scale reasoning-augmented data, and (2) an Adaptive Reinforcement Learning (Adaptive GRPO) stage, which optimizes reasoning behaviors based on task complexity and reward feedback. We further construct a comprehensive adaptive reasoning benchmark that spans text-only, text-audio, text-visual, and text-audio-visual modalities, providing both training and evaluation splits for multimodal reasoning assessment. Experimental results demonstrate that our proposed framework significantly improves adaptive reasoning performance compared to previous baselines. All benchmark data and code will be publicly released.
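The abstract describes Adaptive GRPO as optimizing reasoning behavior "based on task complexity and reward feedback." The paper's exact reward is not given here, so the following is a minimal, hypothetical sketch of what a difficulty-aware, group-relative reward scheme could look like: a correctness reward minus a length penalty that is strong on easy tasks (discouraging overthinking) and weak on hard ones (permitting deep reasoning), followed by GRPO-style within-group normalization. All function names, the `alpha` weight, and the penalty form are assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a difficulty-aware, GRPO-style reward scheme.
# The reward shape and constants are assumptions for illustration only.
from statistics import mean, pstdev

def adaptive_reward(correct: bool, n_reasoning_tokens: int,
                    difficulty: float, alpha: float = 0.001) -> float:
    """Reward correctness; penalize reasoning length more on easy tasks.

    difficulty in [0, 1]: 0 = trivial, 1 = hardest. The length penalty
    scales with (1 - difficulty), so easy prompts discourage long chains
    of thought while hard prompts tolerate them.
    """
    accuracy_reward = 1.0 if correct else 0.0
    length_penalty = alpha * (1.0 - difficulty) * n_reasoning_tokens
    return accuracy_reward - length_penalty

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalize rewards within one sampled group."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a uniform-reward group
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled responses to one easy prompt (difficulty = 0.1).
# (correct?, reasoning tokens) pairs are illustrative.
rewards = [adaptive_reward(c, n, difficulty=0.1)
           for c, n in [(True, 50), (True, 400), (False, 30), (False, 500)]]
advs = grpo_advantages(rewards)
```

Under this sketch, a short correct answer on an easy prompt earns a higher reward than a long correct one, which is the "stop overthinking simple problems" behavior the abstract targets; the normalized advantages would then weight a PPO-style policy update as in standard GRPO.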
Problem

Research questions and friction points this paper is trying to address.

Adaptive reasoning depth adjustment for multimodal tasks
Overcoming rigid reasoning in unified perception-generation models
Dynamic optimization of reasoning based on task complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive reasoning depth adjustment via reinforcement learning
Two-stage training with supervised fine-tuning and reinforcement learning
Comprehensive multimodal benchmark for adaptive reasoning assessment