Many-for-Many: Unify the Training of Multiple Video and Image Generation and Manipulation Tasks

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision generative models are predominantly single-task architectures that rely on large-scale, high-quality annotated datasets, resulting in limited generalization and narrow task coverage. To address this, we propose a many-for-many unified framework, trained from scratch on heterogeneous data drawn from many visual generation and manipulation tasks, including text-to-image/video, image-to-video, video-to-video, and diverse image/video editing, using lightweight conditional adapters and joint image-video progressive training. We further incorporate depth maps as a condition, giving the model a 3D spatial prior, and the resulting single model supports more than ten distinct generation and editing tasks (e.g., T2V, I2V, V2V, inpainting, motion transfer). We release two open-source variants: a high-fidelity 8B-parameter model whose video generation quality is competitive with leading open-source and even commercial engines, and a compact 2B-parameter model. All code and models are publicly available.
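To make the "lightweight conditional adapter" idea concrete, here is a minimal PyTorch sketch of one way such an adapter could inject a task-specific condition into a token-based diffusion backbone. The class name, tensor shapes, and zero-initialized projection are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConditionAdapter(nn.Module):
    """Illustrative adapter: project a task condition into the backbone's token
    space and add it to the noisy latent tokens. Zero initialization makes the
    adapter a no-op at the start of training, so the backbone is undisturbed."""

    def __init__(self, cond_channels: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(cond_channels, hidden_dim)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, tokens: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, hidden_dim) noisy latent tokens of the diffusion backbone
        # cond:   (B, N, cond_channels) condition tokens (e.g., encoded depth maps,
        #         a first-frame latent, or a masked-video latent), aligned with tokens
        return tokens + self.proj(cond)
```

The same adapter interface can then be reused for every task; only the source of the condition tokens changes.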

📝 Abstract
Diffusion models have shown impressive performance in many visual generation and manipulation tasks. Many existing methods focus on training a model for a specific task, especially text-to-video (T2V) generation, while many other works focus on finetuning a pretrained T2V model for image-to-video (I2V), video-to-video (V2V), and image and video manipulation tasks, etc. However, training a strong T2V foundation model requires a large amount of high-quality annotations, which is very costly. In addition, many existing models can perform only one or several tasks. In this work, we introduce a unified framework, namely many-for-many, which leverages the available training data from many different visual generation and manipulation tasks to train a single model for those different tasks. Specifically, we design a lightweight adapter to unify the different conditions in different tasks, then employ a joint image-video learning strategy to progressively train the model from scratch. Our joint learning leads to a unified visual generation and manipulation model with improved video generation performance. In addition, we introduce depth maps as a condition to help our model better perceive the 3D space in visual generation. Two versions of our model are trained with different model sizes (8B and 2B), each of which can perform more than 10 different tasks. In particular, our 8B model demonstrates highly competitive performance in video generation tasks compared to open-source and even commercial engines. Our models and source code are available at https://github.com/leeruibin/MfM.git.
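The abstract's "joint image-video learning strategy" can be pictured as mixing image and video batches under one objective, with images folded in as single-frame videos. The sketch below is a rough assumption of what one such training step might look like; the sampling ratio, flow-matching-style loss, and function names are illustrative, not taken from the paper.

```python
import random
import torch
import torch.nn.functional as F

def joint_image_video_step(model, image_batch, video_batch, image_prob=0.5):
    """One hypothetical joint training step: images are treated as one-frame
    videos so that both modalities share the same backbone and loss."""
    if random.random() < image_prob:
        x = image_batch.unsqueeze(2)   # (B, C, H, W) -> (B, C, 1, H, W)
    else:
        x = video_batch                # (B, C, T, H, W)

    noise = torch.randn_like(x)
    t = torch.rand(x.shape[0], device=x.device).view(-1, 1, 1, 1, 1)  # timesteps in (0, 1)
    x_t = (1.0 - t) * x + t * noise    # interpolate between data and noise
    target = noise - x                 # velocity target (flow-matching-style objective)
    pred = model(x_t, t.flatten())     # backbone predicts the velocity field
    return F.mse_loss(pred, target)
```

Because images and videos pass through the identical code path, the abundant image data can keep improving the shared backbone even when high-quality video annotations are scarce.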
Problem

Research questions and friction points this paper is trying to address.

Training a strong T2V foundation model requires large amounts of high-quality annotations, which is very costly.
Most existing models are trained for a single task, or only a few, limiting generalization and task coverage.
Different generation and manipulation tasks use heterogeneous conditioning signals, so their training data cannot be pooled directly to train one model.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Many-for-many unified framework: a single model trained from scratch on data pooled from many visual generation and manipulation tasks, supporting more than ten tasks (see the routing sketch after this list)
Lightweight adapter that unifies the different conditions used by different tasks
Joint image-video progressive learning strategy that improves video generation performance
Depth maps introduced as an additional condition to help the model perceive 3D space
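One way to picture the many-for-many unification is a per-task routing table that reduces every task to a text prompt plus an optional visual condition fed through the same adapter interface. The task names and condition keys below are hypothetical, not the paper's exact taxonomy.

```python
# Hypothetical routing table: each task maps to the visual condition it needs;
# None means the task is driven by text alone.
TASK_CONDITIONS = {
    "t2i": None,                          # text-to-image
    "t2v": None,                          # text-to-video
    "i2v": "first_frame_latent",          # image-to-video
    "v2v": "source_video_latent",         # video-to-video
    "inpainting": "masked_video_latent",
    "depth_to_video": "depth_map_latent",
    "motion_transfer": "driving_motion_latent",
}

def build_condition(task: str, inputs: dict):
    """Return the visual condition tensor for a task, or None for text-only tasks."""
    key = TASK_CONDITIONS[task]
    return None if key is None else inputs[key]
```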
👥 Authors
Tao Yang (ByteDance)
Ruibin Li (University of Toronto)
Yangming Shi (ByteDance)
Yuqi Zhang (ByteDance)
Qide Dong (ByteDance)
Haoran Cheng (Zhejiang University)
Weiguo Feng (ByteDance)
Shilei Wen (ByteDance)
Bingyue Peng (ByteDance)
Lei Zhang (The Hong Kong Polytechnic University)