SkyReels-V4: A Multimodal Video-Audio Generation, Inpainting, and Editing Model

📅 2026-02-25
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing video generation models struggle to simultaneously support multimodal inputs, synchronized audio-visual generation, and diverse editing capabilities under high-resolution, long-duration conditions. This work proposes a unified multimodal video foundation model built on a dual-stream Multimodal Diffusion Transformer (MMDiT) architecture, capable of jointly generating cinematic-quality audio-visual content at 1080p resolution, 32 fps, and 15-second duration with precise spatiotemporal alignment. The model integrates multiple conditioning modalities, including text, images, video clips, masks, and audio. To keep generation at this scale computationally feasible, it combines low-resolution full-sequence generation with high-resolution keyframe refinement, balancing efficiency and fidelity. By incorporating a multimodal large language model for text encoding, a channel-concatenation-based inpainting mechanism, and a post-hoc super-resolution module, the framework achieves, for the first time, unified audio-visual generation and editing from multimodal inputs, supporting tasks such as image-to-video synthesis, video extension, and visually referenced editing.
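
The paper does not spell out the channel-concatenation mechanism here, but a minimal sketch of how such conditioning is commonly wired, assuming a latent-space diffusion backbone, looks like the following: the noisy video latent, the masked conditioning latent, and the mask itself are stacked along the channel axis, so a single input format covers image-to-video, video extension, and editing. All tensor names and shapes below are illustrative assumptions, not the paper's implementation.

```python
import torch

def build_inpainting_input(video_latent, cond_frames, mask):
    """Channel-concatenation conditioning (hypothetical sketch).

    video_latent : (B, C, T, H, W) noisy latent being denoised
    cond_frames  : (B, C, T, H, W) clean latents of the conditioning
                   content (e.g. an encoded reference image for
                   image-to-video, or prefix frames for video extension);
                   zeros wherever no condition is given
    mask         : (B, 1, T, H, W) 1 = keep/condition, 0 = generate
    """
    # A single interface: the task (image-to-video, extension, editing)
    # is determined entirely by which entries of mask/cond_frames are set.
    return torch.cat([video_latent, cond_frames * mask, mask], dim=1)

# Example: image-to-video, conditioning on the first frame only.
B, C, T, H, W = 1, 16, 8, 32, 32
noisy = torch.randn(B, C, T, H, W)
cond = torch.zeros(B, C, T, H, W)
cond[:, :, 0] = torch.randn(B, C, H, W)   # encoded first frame
mask = torch.zeros(B, 1, T, H, W)
mask[:, :, 0] = 1.0
x = build_inpainting_input(noisy, cond, mask)  # (B, 2C+1, T, H, W)
```

The appeal of this formulation is that the denoiser's input layer sees the same channel layout for every task; switching tasks only changes which frames or regions are marked as conditions.
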

📝 Abstract
SkyReels-V4 is a unified multimodal video foundation model for joint video-audio generation, inpainting, and editing. The model adopts a dual-stream Multimodal Diffusion Transformer (MMDiT) architecture, in which one branch synthesizes video and the other generates temporally aligned audio, while both share a powerful text encoder based on a Multimodal Large Language Model (MMLM). SkyReels-V4 accepts rich multimodal instructions, including text, images, video clips, masks, and audio references. By combining the MMLM's multimodal instruction-following capability with in-context learning in the video-branch MMDiT, the model can inject fine-grained visual guidance under complex conditioning, while the audio-branch MMDiT simultaneously leverages audio references to guide sound generation. On the video side, we adopt a channel-concatenation formulation that unifies a wide range of inpainting-style tasks, such as image-to-video, video extension, and video editing, under a single interface, and naturally extends to vision-referenced inpainting and editing via multimodal prompts. SkyReels-V4 supports up to 1080p resolution, 32 FPS, and 15-second duration, enabling high-fidelity, multi-shot, cinema-level video generation with synchronized audio. To make such high-resolution, long-duration generation computationally feasible, we introduce an efficiency strategy: joint generation of low-resolution full sequences and high-resolution keyframes, followed by dedicated super-resolution and frame-interpolation models. To our knowledge, SkyReels-V4 is the first video foundation model that simultaneously supports multimodal input, joint video-audio generation, and a unified treatment of generation, inpainting, and editing, while maintaining strong efficiency and quality at cinematic resolutions and durations.
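
To make the dual-stream design concrete, here is a minimal, hypothetical sketch of an MMDiT-style block in which video and audio tokens keep separate feed-forward paths but attend jointly, together with shared text-encoder tokens, so the two modalities can align temporally. Module names, dimensions, and the exact attention layout are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class DualStreamBlock(nn.Module):
    """Illustrative dual-stream MMDiT-style block (not SkyReels-V4's code).

    Video and audio tokens have separate norms and MLPs, but a single
    joint attention over the concatenated video/audio/text sequence lets
    the audio branch condition on visual content and vice versa.
    """
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)
        self.mlp_v = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                   nn.Linear(4 * dim, dim))
        self.mlp_a = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                   nn.Linear(4 * dim, dim))

    def forward(self, video, audio, text):
        # Joint attention over the concatenated token sequence.
        nv, na = video.shape[1], audio.shape[1]
        seq = torch.cat([self.norm_v(video), self.norm_a(audio), text], dim=1)
        mixed, _ = self.attn(seq, seq, seq)
        video = video + mixed[:, :nv]
        audio = audio + mixed[:, nv:nv + na]
        # Stream-specific MLPs keep modality statistics separate.
        video = video + self.mlp_v(video)
        audio = audio + self.mlp_a(audio)
        return video, audio

v = torch.randn(1, 64, 256)   # video tokens
a = torch.randn(1, 32, 256)   # audio tokens
t = torch.randn(1, 16, 256)   # shared text tokens from the MMLM encoder
v, a = DualStreamBlock(256)(v, a, t)
```
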
Problem

Research questions and friction points this paper is trying to address.

multi-modal video generation
audio-visual synchronization
video inpainting
video editing
high-resolution video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Diffusion Transformer
joint video-audio generation
unified inpainting and editing
multimodal instruction following
high-resolution long-duration video generation