🤖 AI Summary
This work addresses key challenges in audio-visual multimodal dialogue, namely the weak synergy between cross-modal understanding and speech generation, and limited long-term memory modeling. To this end, we propose InteractiveOmni-4B, a lightweight unified omni-modal large language model (MLLM) tailored for audio-visual multi-turn dialogue. Our method uses an end-to-end architecture that integrates visual and audio encoders, a large language model (LLM), and a speech decoder, coupled with a multi-stage training paradigm: pre-training for omni-modal understanding followed by post-training for speech conversation and multi-turn audio-visual interaction. We further construct a high-quality multi-turn audio-visual dialogue dataset, together with evaluation benchmarks for long-term memory and multi-turn spoken interaction. Experiments demonstrate that our 4-billion-parameter model consistently outperforms leading open-source small MLLMs across image, audio, and video understanding as well as speech generation tasks, matching the performance of 7B-class models while markedly improving cross-modal coherence and human-like dialogue fluency.
📝 Abstract
We introduce InteractiveOmni, a unified and open-source omni-modal large language model for audio-visual multi-turn interaction. Ranging from 4B to 8B parameters, it is designed to lead the field of lightweight models by offering comprehensive omni-modal understanding and speech generation capabilities. To achieve this, we integrate a vision encoder, an audio encoder, a large language model, and a speech decoder into a unified model for understanding and generation tasks. We design a multi-stage training strategy to ensure robust cross-modal capabilities, with pre-training for omni-modal understanding followed by post-training on speech conversation and audio-visual interaction. To enable human-like long-term conversational ability, we meticulously curate a multi-turn training dataset that strengthens the model's handling of complex multi-turn interactions. To effectively evaluate multi-turn memory and speech interaction capabilities, we construct a multi-modal multi-turn memory benchmark and a multi-turn speech interaction benchmark. Experiments demonstrate that InteractiveOmni significantly outperforms leading open-source models and provides a more intelligent multi-turn audio-visual experience, particularly in its long-term memory capabilities. Notably, InteractiveOmni-4B is comparable to much larger models such as Qwen2.5-Omni-7B on general benchmarks, and it retains 97% of the performance of InteractiveOmni-8B while using only 50% of the model size. Achieving state-of-the-art results among similarly sized models across image, audio, and video understanding as well as speech generation tasks, InteractiveOmni is an accessible, open-source foundation for next-generation intelligent interactive systems.
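To make the unified architecture described above concrete, the sketch below shows, in PyTorch-style code, one plausible way a vision encoder, an audio encoder, a language model, and a speech decoder could share a single end-to-end forward path. This is a minimal illustration under assumptions, not the released implementation; all class, method, and argument names (e.g., `InteractiveOmniSketch`, `vision_proj`) are hypothetical placeholders.

```python
# Minimal sketch (not the released implementation) of a unified omni-modal model:
# modality encoders are projected into the LLM embedding space, and a speech
# decoder consumes the LLM hidden states so understanding and generation stay
# end-to-end. All module interfaces here are illustrative assumptions.
import torch
import torch.nn as nn

class InteractiveOmniSketch(nn.Module):
    def __init__(self, vision_encoder, audio_encoder, llm, speech_decoder,
                 vision_dim, audio_dim, llm_dim):
        super().__init__()
        self.vision_encoder = vision_encoder    # e.g., a ViT yielding patch features
        self.audio_encoder = audio_encoder      # e.g., a Whisper-style audio encoder
        self.llm = llm                          # decoder-only language model backbone
        self.speech_decoder = speech_decoder    # maps hidden states to speech tokens
        # Projections that align each modality with the LLM embedding dimension.
        self.vision_proj = nn.Linear(vision_dim, llm_dim)
        self.audio_proj = nn.Linear(audio_dim, llm_dim)

    def forward(self, text_embeds, images=None, audio=None):
        # Build one interleaved embedding sequence across modalities.
        parts = []
        if images is not None:
            parts.append(self.vision_proj(self.vision_encoder(images)))
        if audio is not None:
            parts.append(self.audio_proj(self.audio_encoder(audio)))
        parts.append(text_embeds)
        hidden = self.llm(inputs_embeds=torch.cat(parts, dim=1)).last_hidden_state
        # The same hidden states drive both the text response (via the LLM head)
        # and the spoken response (via the speech decoder).
        speech_tokens = self.speech_decoder(hidden)
        return hidden, speech_tokens
```

In such a design, the multi-stage training strategy from the abstract would map naturally onto this structure: cross-modal pre-training fits the encoders, projections, and LLM for omni-modal understanding, while post-training on speech conversation and audio-visual interaction tunes the full path including the speech decoder.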