UniVerse-1: Unified Audio-Video Generation via Stitching of Experts

📅 2025-09-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of precise temporal and semantic alignment among environmental sounds, speech, and visual frames in synchronized audio-visual generation. We propose the Stitching of Experts (SoE) framework, which composes pre-trained video and audio foundation models, avoiding costly end-to-end training, and introduces an online audio-visual annotation pipeline that achieves frame-level synchronization, eliminating the temporal drift inherent in text-based supervision. Fine-tuned on approximately 7,600 hours of audio-visual data, SoE significantly outperforms baseline methods on environmental sound generation and speech-motion synchronization tasks, achieving performance competitive with Veo3 on our newly constructed benchmark, Verse-Bench. Our key contribution is the first integration of expert model composition with online audio-visual annotation, establishing a novel paradigm for efficient, high-fidelity audio-visual co-generation.

📝 Abstract
We introduce UniVerse-1, a unified, Veo-3-like model capable of simultaneously generating coordinated audio and video. To enhance training efficiency, we bypass training from scratch and instead employ a stitching of experts (SoE) technique. This approach deeply fuses the corresponding blocks of pre-trained video and music generation expert models, thereby fully leveraging their foundational capabilities. To ensure accurate annotations and temporal alignment for both ambient sounds and speech with video content, we developed an online annotation pipeline that processes the required training data and generates labels during the training process. This strategy circumvents the performance degradation often caused by misaligned text-based annotations. Through the synergy of these techniques, our model, after being fine-tuned on approximately 7,600 hours of audio-video data, produces results with well-coordinated audio-visuals for ambient sound generation and strong alignment for speech generation. To systematically evaluate our proposed method, we introduce Verse-Bench, a new benchmark dataset. In an effort to advance research in audio-video generation and to close the performance gap with state-of-the-art models such as Veo3, we make our model and code publicly available. We hope this contribution will benefit the broader research community. Project page: https://dorniwang.github.io/UniVerse-1/.
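
As a way to picture the stitching of experts idea, the sketch below pairs corresponding blocks of a frozen video expert and a frozen audio/music expert and trains only a thin bridge between them. This is a minimal PyTorch-style illustration under stated assumptions: the module names (CrossModalBridge, StitchedBlock) and the residual cross-attention bridge are hypothetical, not the paper's actual fusion architecture.

```python
# Minimal sketch of "stitching" corresponding blocks of two pre-trained experts.
# The bridge design and all module names are illustrative assumptions.
import torch.nn as nn


class CrossModalBridge(nn.Module):
    """Lets the video and audio token streams attend to each other."""

    def __init__(self, v_dim: int, a_dim: int, heads: int = 8):
        super().__init__()
        self.v_from_a = nn.MultiheadAttention(v_dim, heads, kdim=a_dim, vdim=a_dim, batch_first=True)
        self.a_from_v = nn.MultiheadAttention(a_dim, heads, kdim=v_dim, vdim=v_dim, batch_first=True)

    def forward(self, v_tokens, a_tokens):
        # Residual updates keep the frozen experts' behaviour largely intact.
        v_out, _ = self.v_from_a(v_tokens, a_tokens, a_tokens)
        a_out, _ = self.a_from_v(a_tokens, v_tokens, v_tokens)
        return v_tokens + v_out, a_tokens + a_out


class StitchedBlock(nn.Module):
    """Pairs one block from the video expert with one block from the audio expert."""

    def __init__(self, video_block: nn.Module, audio_block: nn.Module, v_dim: int, a_dim: int):
        super().__init__()
        self.video_block = video_block  # pre-trained, assumed frozen in this sketch
        self.audio_block = audio_block  # pre-trained, assumed frozen in this sketch
        self.bridge = CrossModalBridge(v_dim, a_dim)  # the only newly trained part

    def forward(self, v_tokens, a_tokens):
        v_tokens = self.video_block(v_tokens)
        a_tokens = self.audio_block(a_tokens)
        return self.bridge(v_tokens, a_tokens)


def stitch_experts(video_blocks, audio_blocks, v_dim: int = 1024, a_dim: int = 768):
    # Pair corresponding blocks; assumes both experts have the same depth.
    return nn.ModuleList(
        StitchedBlock(v, a, v_dim, a_dim) for v, a in zip(video_blocks, audio_blocks)
    )
```

Freezing the expert blocks and learning only residual bridges is one plausible way to reuse the experts' foundational capabilities while learning audio-visual coupling, consistent with the abstract's stated goal of avoiding training from scratch.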
Problem

Research questions and friction points this paper is trying to address.

Simultaneously generating coordinated audio and video content
Overcoming performance degradation from misaligned text annotations
Closing performance gap with state-of-the-art audio-video generation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stitching of pre-trained video and music expert models
Online annotation pipeline for temporal alignment (a sketch follows this list)
Fine-tuned on approximately 7,600 hours of audio-video data
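
To make the online annotation item above concrete, here is a minimal sketch of on-the-fly, frame-aligned labeling during training. The annotator interfaces (asr_model, sound_tagger), the clip attributes, and the label format are hypothetical assumptions; the paper's actual pipeline is not reproduced here.

```python
# Hypothetical sketch of online annotation: labels are derived from the audio at
# training time and mapped to frame indices, instead of using fixed text captions.
import math


def annotate_clip(clip, asr_model, sound_tagger, fps: float = 24.0):
    """Return per-frame labels for one audio-video clip.

    Assumes `clip` exposes `audio` (waveform), `sample_rate`, and `num_frames`,
    and that both annotators return items with start_time/end_time in seconds.
    """
    labels = [{"speech": None, "events": []} for _ in range(clip.num_frames)]

    # Speech: timestamped transcription, mapped from seconds to frame indices.
    for word in asr_model.transcribe(clip.audio, clip.sample_rate):
        start = int(math.floor(word.start_time * fps))
        end = min(int(math.ceil(word.end_time * fps)), clip.num_frames)
        for f in range(start, end):
            labels[f]["speech"] = word.text

    # Ambient sound events: per-segment tags, mapped to frames the same way.
    for event in sound_tagger.tag(clip.audio, clip.sample_rate):
        start = int(math.floor(event.start_time * fps))
        end = min(int(math.ceil(event.end_time * fps)), clip.num_frames)
        for f in range(start, end):
            labels[f]["events"].append(event.label)

    return labels  # consumed by the training loop together with the raw frames
```

Because the labels are generated from the audio itself and indexed by frame, they stay synchronized with the video by construction, which is the property the paper credits with avoiding the temporal drift of precomputed text annotations.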
Duomin Wang
Senior Researcher, StepFun
computer vision
Wei Zuo
StepFun, The Hong Kong University of Science and Technology (Guangzhou)
Aojie Li
StepFun, The Hong Kong University of Science and Technology (Guangzhou)
Ling-Hao Chen
Ph.D. Student, Tsinghua University, IDEA Research
Computer Graphics, Computer Vision, Character Animation
Xinyao Liao
Huazhong University of Science and Technology
Deyu Zhou
Professor, School of Computer Science and Engineering, SEU
natural language processing
Zixin Yin
StepFun, The Hong Kong University of Science and Technology (Guangzhou), The Hong Kong University of Science and Technology
Xili Dai
UC Berkeley; HKUST
computer vision
Daxin Jiang
Co-Founder & CEO, StepFun Corporation
Deep Learning, Foundation Models
Gang Yu
StepFun, The Hong Kong University of Science and Technology (Guangzhou)