Unlocking Cognitive Capabilities and Analyzing the Perception-Logic Trade-off

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of integrating perception and reasoning in multimodal large language models under data scarcity and culturally specific contexts in Southeast Asia. We propose MERaLiON2-Omni (Alpha), a 10B-parameter multilingual omni-perception model trained through a staged pipeline: first establishing a region-aware backbone, then injecting cognitive capabilities via a Generate-Judge-Refine process to decouple yet coordinate perception and reasoning. We uncover, for the first time, an efficiency–stability trade-off between perception and logical reasoning, and introduce a novel data synthesis method that requires no large-scale annotations. Additionally, we construct the SEA-Omni Benchmark Suite—the first multimodal evaluation benchmark tailored for Southeast Asia. The model demonstrates significant gains in mathematical reasoning and instruction following, while our diagnostics reveal temporal drift in long-context audio and visual over-interpretation, thereby quantifying the interference of high-level reasoning with low-level perception.

📝 Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) pursue omni-perception capabilities, yet integrating robust sensory grounding with complex reasoning remains a challenge, particularly for underrepresented regions. In this report, we introduce the research preview of MERaLiON2-Omni (Alpha), a 10B-parameter multilingual omni-perception model tailored for Southeast Asia (SEA). We present a progressive training pipeline that explicitly decouples and then integrates "System 1" (Perception) and "System 2" (Reasoning) capabilities. First, we establish a robust Perception Backbone by aligning region-specific audio-visual cues (e.g., Singlish code-switching, local cultural landmarks) with a multilingual LLM through orthogonal modality adaptation. Second, to inject cognitive capabilities without large-scale supervision, we propose a cost-effective Generate-Judge-Refine pipeline. By utilizing a Super-LLM to filter hallucinations and resolve conflicts via a consensus mechanism, we synthesize high-quality silver data that transfers textual Chain-of-Thought reasoning to multimodal scenarios. Comprehensive evaluation on our newly introduced SEA-Omni Benchmark Suite reveals an Efficiency-Stability Paradox: while reasoning acts as a non-linear amplifier for abstract tasks (boosting mathematical and instruction-following performance significantly), it introduces instability in low-level sensory processing. Specifically, we identify Temporal Drift in long-context audio, where extended reasoning desynchronizes the model from acoustic timestamps, and Visual Over-interpretation, where logic overrides pixel-level reality. This report details the architecture, the data-efficient training recipe, and a diagnostic analysis of the trade-offs between robust perception and structured reasoning.
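The Generate-Judge-Refine loop described above can be sketched as follows. This is a minimal illustration only, assuming hypothetical `generate`/`judge`/`refine` stand-ins for the base model and the Super-LLM; the actual prompts, models, and scoring used in the paper are not specified here.

```python
# Hypothetical sketch of a Generate-Judge-Refine loop for silver-data
# synthesis. All model calls are stubs; names are illustrative, not
# the paper's API.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    rationale: str

def generate(sample, n=3):
    # Stand-in for sampling n chain-of-thought candidates from the base model.
    return [Candidate(answer=f"ans-{i % 2}", rationale=f"cot-{i}") for i in range(n)]

def judge(sample, cand):
    # Stand-in for the Super-LLM rejecting hallucinated rationales.
    return "hallucinated" not in cand.rationale

def refine(sample, cand):
    # Stand-in for the Super-LLM polishing a surviving rationale.
    return Candidate(cand.answer, cand.rationale + " (refined)")

def generate_judge_refine(sample):
    """Return one refined silver-data candidate, or None if all are filtered."""
    kept = [c for c in generate(sample) if judge(sample, c)]
    if not kept:
        return None  # discard samples with no surviving candidate
    # Consensus mechanism: resolve conflicts by majority answer across candidates.
    majority, _ = Counter(c.answer for c in kept).most_common(1)[0]
    winner = next(c for c in kept if c.answer == majority)
    return refine(sample, winner)
```

The key design point is that judging and consensus happen before refinement, so the expensive rewrite step is only spent on candidates that already pass the filter.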
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Perception-Reasoning Trade-off
Sensory Grounding
Cognitive Capabilities
Underrepresented Regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

omni-perception
perception-reasoning decoupling
multimodal chain-of-thought
data-efficient training
cognitive trade-off