ThinkOmni: Lifting Textual Reasoning to Omni-modal Scenarios via Guidance Decoding

📅 2026-02-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing omni-modal large language models (OLLMs) exhibit strong perceptual capabilities but limited complex reasoning performance, and enhancing their reasoning abilities is often hindered by the scarcity of high-quality data, difficulties in task adaptation, and high computational costs. This work proposes the first training-free framework for omni-modal reasoning enhancement, which transfers the reasoning capabilities of a text-only large reasoning model (LRM) to an OLLM through guided decoding. The approach introduces an "LRM-as-a-Guide" mechanism coupled with a stepwise contrastive scaling strategy that dynamically balances multimodal perception and symbolic reasoning signals without manual hyperparameter tuning. Evaluated on six mainstream multimodal reasoning benchmarks, the method achieves significant performance gains, attaining 70.2 on MathVista and 75.5 on MMAU, thereby demonstrating its effectiveness and generalizability.

πŸ“ Abstract
Omni-modal reasoning is essential for intelligent systems to understand and draw inferences from diverse data sources. While existing omni-modal large language models (OLLM) excel at perceiving diverse modalities, they lack the complex reasoning abilities of recent large reasoning models (LRM). However, enhancing the reasoning ability of OLLMs through additional training presents significant challenges, including the need for high-quality data, task-specific adaptation, and substantial computational costs. To address these limitations, we propose ThinkOmni, a training-free and data-free framework that lifts textual reasoning to omni-modal scenarios. ThinkOmni introduces two key components: 1) LRM-as-a-Guide, which leverages off-the-shelf LRMs to guide the OLLM decoding process; 2) Stepwise Contrastive Scaling, which adaptively balances perception and reasoning signals without manual hyperparameter tuning. Experiments on six multi-modal reasoning benchmarks demonstrate that ThinkOmni consistently delivers performance improvements, with main results achieving 70.2 on MathVista and 75.5 on MMAU. Overall, ThinkOmni offers a flexible and generalizable solution for omni-modal reasoning and provides new insights into the generalization and application of reasoning capabilities.
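To make the decoding-time idea concrete, here is a minimal, hypothetical sketch of one guided decoding step: an omni-modal model's next-token logits are interpolated toward a text-only reasoner's logits, with the mixing weight set per step rather than hand-tuned. The paper does not publish its exact formulas here, so the confidence-based weighting below (`adaptive_alpha`) and all function names are illustrative assumptions, not ThinkOmni's actual implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_alpha(ollm_logits, lrm_logits):
    # Toy stand-in for "Stepwise Contrastive Scaling" (assumption: not
    # the paper's formula). Weight the text-only reasoner's guidance by
    # its relative confidence at the current step, so neither the
    # perception signal nor the reasoning signal always dominates.
    conf_ollm = max(softmax(ollm_logits))
    conf_lrm = max(softmax(lrm_logits))
    return conf_lrm / (conf_ollm + conf_lrm)

def guided_next_token(ollm_logits, lrm_logits):
    # One decoding step: shift the omni-modal model's logits toward the
    # reasoner's logits by the adaptive weight, then pick greedily.
    alpha = adaptive_alpha(ollm_logits, lrm_logits)
    fused = [o + alpha * (l - o) for o, l in zip(ollm_logits, lrm_logits)]
    return fused.index(max(fused)), alpha
```

In this toy setting, a confident reasoner can override the perception model's top choice: with OLLM logits `[2.0, 1.0, 0.0]` and LRM logits `[0.0, 3.0, 0.0]`, the fused distribution selects token 1 rather than token 0, because no training or manual hyperparameter is involved in setting the weight.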
Problem

Research questions and friction points this paper is trying to address.

omni-modal reasoning
large reasoning models
multimodal perception
reasoning enhancement
training challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

omni-modal reasoning
training-free framework
LRM-as-a-Guide
Stepwise Contrastive Scaling
guidance decoding