Qwen2.5-Omni Technical Report

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses key challenges in end-to-end streaming multimodal understanding and generation, namely cross-modal interference, temporal asynchrony, and high first-token latency. We propose a unified architecture that jointly perceives text, image, audio, and video while generating both text and natural speech responses in a streaming fashion. Our core contributions are: (1) TMRoPE, a time-aligned rotary position embedding for precise cross-modal temporal modeling; (2) a Thinker-Talker dual-track decoding framework that decouples semantic reasoning from speech synthesis to eliminate inter-modal interference; and (3) a sliding-window DiT-based speech decoder that substantially reduces first-packet latency. Trained end to end with block-wise audiovisual encoders for streaming, the model achieves state-of-the-art performance on multimodal benchmarks including Omni-Bench. Its accuracy on spoken instructions matches that of equivalent text inputs on benchmarks such as MMLU and GSM8K, and its streaming speech synthesis surpasses state-of-the-art methods in robustness and naturalness.

📝 Abstract
In this report, we present Qwen2.5-Omni, an end-to-end multimodal model designed to perceive diverse modalities, including text, images, audio, and video, while simultaneously generating text and natural speech responses in a streaming manner. To enable the streaming of multimodal information inputs, both audio and visual encoders utilize a block-wise processing approach. To synchronize the timestamps of video inputs with audio, we organize the audio and video sequentially in an interleaved manner and propose a novel position embedding approach, named TMRoPE (Time-aligned Multimodal RoPE). To concurrently generate text and speech while avoiding interference between the two modalities, we propose the Thinker-Talker architecture. In this framework, Thinker functions as a large language model tasked with text generation, while Talker is a dual-track autoregressive model that directly utilizes the hidden representations from the Thinker to produce audio tokens as output. Both the Thinker and Talker models are designed to be trained and inferred in an end-to-end manner. For decoding audio tokens in a streaming manner, we introduce a sliding-window DiT that restricts the receptive field, aiming to reduce the initial packet delay. Qwen2.5-Omni is comparable to the similarly sized Qwen2.5-VL and outperforms Qwen2-Audio. Furthermore, Qwen2.5-Omni achieves state-of-the-art performance on multimodal benchmarks like Omni-Bench. Notably, Qwen2.5-Omni's performance in end-to-end speech instruction following is comparable to its capabilities with text inputs, as evidenced by benchmarks such as MMLU and GSM8K. As for speech generation, Qwen2.5-Omni's streaming Talker outperforms most existing streaming and non-streaming alternatives in robustness and naturalness.
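The Thinker-Talker split described above can be illustrated with a minimal toy sketch: at each step the Thinker emits a text token plus a hidden state, and the Talker conditions on that hidden state (rather than on the generated text) to produce audio-codec tokens on a parallel track. All names, sizes, and update rules below are illustrative stand-ins, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8          # toy hidden size (illustrative, not the paper's)
AUDIO_VOCAB = 16    # toy audio-codec vocabulary size

def thinker_step(token_id: int) -> tuple[np.ndarray, int]:
    """Toy 'Thinker': stand-in for the LLM forward pass, returning a
    hidden state and the next text token."""
    hidden = rng.standard_normal(HIDDEN)
    next_token = (token_id + 1) % 100
    return hidden, next_token

def talker_step(hidden: np.ndarray, prev_audio_token: int) -> int:
    """Toy 'Talker': conditions on the Thinker's hidden state, not its
    text output, and emits the next audio token autoregressively."""
    score = hidden.sum() + prev_audio_token
    return int(abs(score)) % AUDIO_VOCAB

def generate(prompt_token: int, steps: int):
    """Run both tracks in lockstep: semantic reasoning and speech
    synthesis proceed concurrently without sharing an output stream."""
    text_tokens, audio_tokens = [], []
    tok, audio_tok = prompt_token, 0
    for _ in range(steps):
        hidden, tok = thinker_step(tok)             # semantic track
        audio_tok = talker_step(hidden, audio_tok)  # speech track
        text_tokens.append(tok)
        audio_tokens.append(audio_tok)
    return text_tokens, audio_tokens
```

The point of the sketch is the interface: the Talker never sees the text stream, only hidden representations, which is how the abstract describes the two modalities avoiding interference.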
Problem

Research questions and friction points this paper is trying to address.

Develops Qwen2.5-Omni for multimodal perception and response generation
Proposes Thinker-Talker architecture to avoid text-speech interference
Introduces TMRoPE for synchronized audio-video input processing
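The interleaving behind TMRoPE can be sketched roughly as follows: audio and video frames are grouped into fixed-length time chunks, ordered chunk by chunk, and each token receives a temporal position derived from absolute time, so audio and video tokens that co-occur share the same temporal index. The chunk length, within-chunk ordering, and positions-per-second granularity below are assumptions for illustration, not the paper's exact values.

```python
def interleave_av(audio_frames, video_frames, chunk_s=2.0, pos_per_s=25):
    """Sketch of time-aligned audio/video interleaving.
    Frames are (timestamp_seconds, payload) pairs. Tokens are grouped
    into chunk_s-second chunks; within a chunk, video tokens are placed
    before audio tokens (assumed ordering). Each token's temporal
    position id is derived from its absolute timestamp, so co-occurring
    audio and video tokens get matching temporal indices."""
    tokens = []
    for modality, frames in (("video", video_frames), ("audio", audio_frames)):
        for t, payload in frames:
            chunk = int(t // chunk_s)
            temporal_pos = int(t * pos_per_s)
            tokens.append((chunk, modality, temporal_pos, payload))
    # order by time chunk; within a chunk, video precedes audio
    tokens.sort(key=lambda x: (x[0], 0 if x[1] == "video" else 1, x[2]))
    return [(m, p) for _, m, p, _ in tokens]
```

For example, a video frame at 0.5 s and an audio frame at 0.5 s would receive the same temporal position id even though they occupy different slots in the interleaved sequence.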
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-wise processing for streaming inputs
TMRoPE for synchronized audio-video alignment
Thinker-Talker architecture for text-speech generation
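The sliding-window idea behind the streaming DiT speech decoder can be sketched as an attention mask that limits each block of audio codes to a small past and future window, so the first audio packet can be decoded without waiting for the full sequence. The window sizes below are illustrative, not the paper's values.

```python
import numpy as np

def sliding_window_mask(n_blocks: int, lookback: int, lookahead: int) -> np.ndarray:
    """Boolean attention mask restricting each code block's receptive
    field: block i may attend to blocks [i - lookback, i + lookahead].
    A bounded lookahead is what caps the initial packet delay, since
    decoding block 0 needs only the first (lookahead + 1) blocks."""
    mask = np.zeros((n_blocks, n_blocks), dtype=bool)
    for i in range(n_blocks):
        lo = max(0, i - lookback)
        hi = min(n_blocks, i + lookahead + 1)
        mask[i, lo:hi] = True
    return mask
```

With full attention, the first output would depend on all n_blocks inputs; the window makes that dependency constant in sequence length, which is the latency reduction the summary attributes to the sliding-window DiT.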