🤖 AI Summary
High-performance open-source multimodal large language models (MLLMs) remain scarce, hindering unified processing of images, videos, audio, and text with real-time interactive capabilities. To address this, we propose Baichuan-Omni, the first open-source 7B-parameter MLLM capable of concurrently processing images, videos, audio, and text. It is trained with a two-stage paradigm, modality alignment followed by multi-task fine-tuning, and integrates a CLIP-based visual encoder, a Whisper-based audio encoder, and learnable modality adapters to achieve unified cross-modal representation learning and joint reasoning across vision, audio, and language. Evaluated on major benchmarks, including OmniBench, MMBench, and VideoMME, Baichuan-Omni achieves state-of-the-art performance among open-source models while supporting low-latency, real-time speech-vision-text interaction. By bridging the gap between capability and openness, Baichuan-Omni provides an accessible foundation for multimodal research and applications.
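To make the encoder-adapter-LLM layout above concrete, here is a minimal PyTorch sketch. All names, dimensions, and the two-layer MLP adapter design are illustrative assumptions on our part, not the paper's actual implementation; the real model couples pretrained CLIP-style and Whisper-style encoders with a 7B language model, which is omitted here.

```python
import torch
import torch.nn as nn


class ModalityAdapter(nn.Module):
    """Hypothetical adapter: projects encoder features into the LLM's embedding space."""

    def __init__(self, in_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)


class OmniModalFusion(nn.Module):
    """Fuses projected vision/audio tokens with text embeddings for an LLM (sketch)."""

    def __init__(self, vision_dim: int = 1024, audio_dim: int = 1280, llm_dim: int = 4096):
        super().__init__()
        # Stand-ins for features from a CLIP-based visual encoder and a
        # Whisper-based audio encoder; the encoders themselves are not shown.
        self.vision_adapter = ModalityAdapter(vision_dim, llm_dim)
        self.audio_adapter = ModalityAdapter(audio_dim, llm_dim)

    def forward(self, text_embeds, vision_feats=None, audio_feats=None):
        # Concatenate projected modality tokens with text token embeddings
        # along the sequence dimension; the combined sequence would then be
        # passed to the language model for joint reasoning.
        parts = []
        if vision_feats is not None:
            parts.append(self.vision_adapter(vision_feats))
        if audio_feats is not None:
            parts.append(self.audio_adapter(audio_feats))
        parts.append(text_embeds)
        return torch.cat(parts, dim=1)


if __name__ == "__main__":
    model = OmniModalFusion()
    text = torch.randn(1, 16, 4096)   # (batch, text tokens, llm_dim)
    img = torch.randn(1, 256, 1024)   # (batch, image patch features, vision_dim)
    aud = torch.randn(1, 100, 1280)   # (batch, audio frame features, audio_dim)
    fused = model(text, vision_feats=img, audio_feats=aud)
    print(fused.shape)  # torch.Size([1, 372, 4096])
```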
📝 Abstract
The salient multimodal capabilities and interactive experience of GPT-4o highlight its critical role in practical applications, yet it lacks a high-performing open-source counterpart. In this paper, we introduce Baichuan-Omni, the first open-source 7B Multimodal Large Language Model (MLLM) adept at concurrently processing and analyzing the image, video, audio, and text modalities, while delivering an advanced multimodal interactive experience and strong performance. We propose an effective multimodal training schema that starts from a 7B model and proceeds through two stages: multimodal alignment, followed by multitask fine-tuning across the audio, image, video, and text modalities (sketched below). This approach equips the language model with the ability to handle visual and audio data effectively. Baichuan-Omni demonstrates strong performance across various omni-modal and multimodal benchmarks, and we hope this contribution serves as a competitive baseline for the open-source community in advancing multimodal understanding and real-time interaction.
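One common way to realize such a two-stage schema is to train only the lightweight adapters during alignment and then unfreeze the language model for multitask fine-tuning. The sketch below illustrates that recipe; which components Baichuan-Omni actually freezes in each stage is an assumption here, as the abstract specifies only the two stages themselves.

```python
import torch.nn as nn


def set_trainable(module: nn.Module, flag: bool) -> None:
    """Toggle gradient computation for all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = flag


def configure_stage(llm: nn.Module, adapters: nn.Module, stage: str) -> None:
    # Hypothetical staging: the paper names two stages but does not state
    # which components are frozen in each; this split is an assumption.
    if stage == "alignment":
        # Stage 1: multimodal alignment -- learn only the modality adapters
        # so encoder features map into the frozen LLM's embedding space.
        set_trainable(llm, False)
        set_trainable(adapters, True)
    elif stage == "multitask_finetuning":
        # Stage 2: multitask fine-tuning -- update the LLM and adapters
        # jointly on mixed audio/image/video/text supervision.
        set_trainable(llm, True)
        set_trainable(adapters, True)
    else:
        raise ValueError(f"unknown stage: {stage}")
```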