🤖 AI Summary
Existing omni-modal large language models (OLLMs) exhibit strong perceptual capabilities but limited complex reasoning performance, and enhancing their reasoning abilities is often hindered by the scarcity of high-quality data, difficulties in task adaptation, and high computational costs. This work proposes the first training-free framework for omni-modal reasoning enhancement, which transfers the reasoning capabilities of a text-only large reasoning model (LRM) to an OLLM through guided decoding. The approach introduces an "LRM-as-a-Guide" mechanism coupled with a stepwise contrastive scaling strategy that dynamically balances multimodal perception and symbolic reasoning signals without manual hyperparameter tuning. Evaluated on six mainstream multimodal reasoning benchmarks, the method achieves significant performance gains, attaining 70.2 on MathVista and 75.5 on MMAU, thereby demonstrating its effectiveness and generalizability.
📝 Abstract
Omni-modal reasoning is essential for intelligent systems to understand and draw inferences from diverse data sources. While existing omni-modal large language models (OLLMs) excel at perceiving diverse modalities, they lack the complex reasoning abilities of recent large reasoning models (LRMs). However, enhancing the reasoning ability of OLLMs through additional training presents significant challenges, including the need for high-quality data, task-specific adaptation, and substantial computational costs. To address these limitations, we propose ThinkOmni, a training-free and data-free framework that lifts textual reasoning to omni-modal scenarios. ThinkOmni introduces two key components: 1) LRM-as-a-Guide, which leverages off-the-shelf LRMs to guide the OLLM decoding process; 2) Stepwise Contrastive Scaling, which adaptively balances perception and reasoning signals without manual hyperparameter tuning. Experiments on six multimodal reasoning benchmarks demonstrate that ThinkOmni consistently delivers performance improvements, achieving 70.2 on MathVista and 75.5 on MMAU. Overall, ThinkOmni offers a flexible and generalizable solution for omni-modal reasoning and provides new insights into the generalization and application of reasoning capabilities.
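The guided-decoding idea behind the two components can be illustrated with a minimal sketch: at each decoding step, the OLLM's perception-driven next-token logits are mixed with the text-only LRM's reasoning-driven logits under a scaling weight. This is a hypothetical simplification, not the paper's actual implementation; the function name `guided_step` and the fixed `alpha` parameter are illustrative (in ThinkOmni the balance is set adaptively per step, without manual tuning).

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over a vocabulary of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def guided_step(ollm_logits: np.ndarray, lrm_logits: np.ndarray, alpha: float) -> int:
    """One greedy decoding step mixing perception and reasoning signals.

    alpha in [0, 1] weights the text-only reasoner's contribution.
    A fixed alpha stands in for the paper's stepwise adaptive scaling.
    """
    mixed = (1.0 - alpha) * ollm_logits + alpha * lrm_logits
    return int(np.argmax(softmax(mixed)))

# Toy illustration over a 5-token vocabulary.
ollm = np.array([2.0, 1.0, 0.5, 0.1, 0.0])   # perception-driven scores
lrm  = np.array([0.0, 0.5, 3.0, 0.1, 0.2])   # reasoning-driven scores

print(guided_step(ollm, lrm, alpha=0.0))  # OLLM alone -> token 0
print(guided_step(ollm, lrm, alpha=0.8))  # reasoning-guided -> token 2
```

With `alpha=0.0` the OLLM decides alone; as `alpha` grows, the LRM's preference can override a weak perceptual choice, which is the intuition behind letting the guide's influence vary step by step.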