InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key challenges in audio-visual multimodal dialogue—namely, poor cross-modal understanding–speech generation synergy and weak long-term memory modeling. To this end, we propose InteractiveOmni-4B, the first lightweight unified multimodal large language model (MLLM) tailored for audio-visual multi-turn dialogue. Our method introduces an end-to-end architecture integrating visual/audio encoders, a language model (LLM), and a speech decoder, coupled with a novel multi-stage training paradigm comprising cross-modal alignment pretraining and multi-turn dialogue post-training. We further construct a high-quality multi-turn audio-video dialogue dataset and the first evaluation benchmark supporting both long-term memory and spoken interaction. Experiments demonstrate that our 4-billion-parameter model consistently outperforms leading open-source small MLLMs across image, audio, and video understanding as well as speech generation tasks—matching the performance of 7B-class models—while significantly improving cross-modal coherence and human-like dialogue fluency.

Technology Category

Application Category

📝 Abstract
We introduce InteractiveOmni, a unified and open-source omni-modal large language model for audio-visual multi-turn interaction, ranging from 4B to 8B parameters, designed to lead the field of lightweight models by offering comprehensive omni-modal understanding and speech generation capabilities. To achieve this, we integrate the vision encoder, audio encoder, large language model, and speech decoder into a unified model for understanding and generation tasks. We design a multi-stage training strategy to ensure robust cross-modal capabilities, including pre-training for omni-modal understanding, followed by post-training with speech conversation and audio-visual interaction. To enable human-like long-term conversational ability, we meticulously curate a multi-turn training dataset that enhances the model's ability to handle complex and multi-turn interactions. To effectively evaluate the multi-turn memory and speech interaction capabilities, we construct the multi-modal multi-turn memory benchmark and the multi-turn speech interaction benchmark. Experiments demonstrate that InteractiveOmni significantly outperforms leading open-source models and provides a more intelligent multi-turn audio-visual experience, particularly in its long-term memory capabilities. Notably, InteractiveOmni-4B is comparable to the much larger model like Qwen2.5-Omni-7B on general benchmarks, and it can retain 97% of the performance of the InteractiveOmni-8B while utilizing only 50% of the model size. Achieving state-of-the-art results against similarly sized models across image, audio, video understanding, and speech generation tasks, InteractiveOmni is an accessible, open-source foundation for next-generation intelligent interactive systems.
Problem

Research questions and friction points this paper is trying to address.

Developing lightweight multimodal model for audiovisual dialogue
Enabling longterm conversational memory through curated datasets
Creating unified architecture for understanding and speech generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified model integrates vision audio language speech
Multi-stage training strategy for cross-modal capabilities
Multi-turn dataset enhances long-term conversational ability
🔎 Similar Papers
No similar papers found.
Wenwen Tong
Wenwen Tong
Peking University
Fluid MechanicsDeep Learning
H
Hewei Guo
SenseTime Research
D
Dongchuan Ran
SenseTime Research
J
Jiangnan Chen
SenseTime Research
J
Jiefan Lu
SenseTime Research
Kaibin Wang
Kaibin Wang
SenseTime Research
Keqiang Li
Keqiang Li
Department of Automotive Engineering, Tsinghua University
Intelligent VehiclesAdvanced Driver Assistant Systems
Xiaoxu Zhu
Xiaoxu Zhu
Soochow University
J
Jiakui Li
SenseTime Research
Kehan Li
Kehan Li
Stanford University
X
Xueheng Li
SenseTime Research
L
Lumin Li
SenseTime Research
C
Chenxu Guo
SenseTime Research
J
Jiasheng Zhou
SenseTime Research
J
Jiandong Chen
SenseTime Research
X
Xianye Wu
SenseTime Research
J
Jiahao Wang
SenseTime Research
S
Silei Wu
SenseTime Research
Hanming Deng
Hanming Deng
Unknown affiliation
Deep LearningObject Detection
Yuxuan Song
Yuxuan Song
Tsinghua University
Deep Generative ModelsLLM4Science,
D
Dinghao Zhou
SenseTime Research
G
Guiping Zhong
SenseTime Research
K
Ken Zheng
SenseTime Research
Shiyin Kang
Shiyin Kang
SenseTime Research
Lewei Lu
Lewei Lu
Research Director (We're Hiring, luotto@sensetime.com) @ SenseTime Research
Computer VisionDeep Learning