Video Diffusion Transformers are In-Context Learners

📅 2024-12-14
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of in-context learning (ICL) capability in video diffusion Transformers. We propose a method that activates ICL without modifying the model architecture—requiring only minimal fine-tuning. Our approach introduces (i) a spatiotemporally consistent video stitching strategy, (ii) a multi-clip joint caption generation mechanism, and (iii) few-shot task-specialized fine-tuning. Together, these enable the first ICL-based controllable generation for video diffusion models while preserving the original architecture. The method supports synthesis of videos exceeding 30 seconds, significantly improving cross-scene coherence, character consistency, and prompt alignment—all with zero additional inference overhead. To foster reproducibility and further research, we release our code, dataset, and pretrained weights.

📝 Abstract
This paper investigates a solution for enabling in-context capabilities of video diffusion transformers, with minimal tuning required for activation. Specifically, we propose a simple pipeline to leverage in-context generation: (i) concatenate videos along the spatial or temporal dimension, (ii) jointly caption multi-scene video clips from one source, and (iii) apply task-specific fine-tuning using carefully curated small datasets. Through a series of diverse controllable tasks, we demonstrate qualitatively that existing advanced text-to-video models can effectively perform in-context generation. Notably, this allows for the creation of consistent multi-scene videos exceeding 30 seconds in duration, without additional computational overhead. Importantly, the method requires no modifications to the original models and yields high-fidelity video outputs that better align with prompt specifications and maintain role consistency. Our framework presents a valuable tool for the research community and offers critical insights for advancing product-level controllable video generation systems. The data, code, and model weights are publicly available at: https://github.com/feizc/Video-In-Context.
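The stitching step (i) of the pipeline can be sketched with plain array operations. The snippet below is a minimal illustration, not the paper's implementation: the clip shapes, sizes, and the placeholder caption are assumptions for the example.

```python
import numpy as np

# Hypothetical clips as (frames, height, width, channels) arrays.
clip_a = np.random.rand(16, 64, 64, 3)
clip_b = np.random.rand(16, 64, 64, 3)

# Temporal stitching: concatenate along the frame axis to form one longer video.
temporal = np.concatenate([clip_a, clip_b], axis=0)  # shape (32, 64, 64, 3)

# Spatial stitching: place the clips side by side along the width axis.
spatial = np.concatenate([clip_a, clip_b], axis=2)   # shape (16, 64, 128, 3)

# Step (ii) would pair the stitched video with one joint caption covering
# both scenes (produced by a separate captioning model in the paper):
joint_caption = "Scene 1: ... Scene 2: ..."  # placeholder prompt format
```

The stitched video plus joint caption then serve as a single training sample for the few-shot, task-specific fine-tuning in step (iii).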
Problem

Research questions and friction points this paper is trying to address.

Enable in-context learning for video diffusion transformers
Generate consistent multi-scene videos without extra overhead
Improve prompt alignment and role consistency in outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverage in-context video generation
Concatenate videos along spatial or temporal dimensions
Task-specific fine-tuning with datasets