🤖 AI Summary
This work addresses the lack of in-context learning (ICL) capability in video diffusion Transformers. We propose a method that activates ICL without modifying the model architecture, requiring only minimal fine-tuning. Our approach introduces (i) a spatiotemporally consistent video stitching strategy, (ii) a multi-clip joint caption generation mechanism, and (iii) few-shot task-specialized fine-tuning. Together, these enable ICL-based controllable generation for video diffusion models while preserving the original architecture. The method supports synthesis of videos exceeding 30 seconds, significantly improving cross-scene coherence, character consistency, and prompt alignment, all with no additional inference overhead. To foster reproducibility and further research, we release our code, dataset, and pretrained weights.
📝 Abstract
This paper investigates a solution for enabling in-context capabilities of video diffusion transformers, requiring only minimal tuning for activation. Specifically, we propose a simple pipeline to leverage in-context generation: (**i**) concatenate videos along the spatial or temporal dimension, (**ii**) jointly caption multi-scene video clips from a single source, and (**iii**) apply task-specific fine-tuning on carefully curated small datasets. Through a series of diverse controllable tasks, we demonstrate qualitatively that existing advanced text-to-video models can effectively perform in-context generation. Notably, this enables the creation of consistent multi-scene videos exceeding 30 seconds in duration without additional computational overhead. Importantly, the method requires no modifications to the original models and yields high-fidelity video outputs that better align with prompt specifications and maintain role consistency. Our framework offers a valuable tool for the research community and critical insights for advancing product-level controllable video generation systems. The data, code, and model weights are publicly available at: https://github.com/feizc/Video-In-Context.
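To make step (**i**) concrete, the sketch below shows what concatenating clips along the spatial or temporal dimension looks like on raw frame tensors. This is a hypothetical illustration using NumPy, not the authors' released code; the function name `stitch_clips` and the `(T, H, W, C)` layout are assumptions for the example.

```python
import numpy as np

def stitch_clips(clips, axis="time"):
    """Concatenate video clips, each a (T, H, W, C) array.

    Hypothetical helper illustrating step (i) of the pipeline:
    - "time":  play clips back to back (concatenate along frames)
    - "space": place clips side by side (concatenate along width)
    """
    if axis == "time":
        # clips must share H, W, C; frame counts may differ
        return np.concatenate(clips, axis=0)
    if axis == "space":
        # clips must share T, H, C; widths may differ
        return np.concatenate(clips, axis=2)
    raise ValueError("axis must be 'time' or 'space'")

# Two toy 16-frame, 64x64 RGB clips
clip_a = np.zeros((16, 64, 64, 3), dtype=np.uint8)
clip_b = np.ones((16, 64, 64, 3), dtype=np.uint8)

print(stitch_clips([clip_a, clip_b], "time").shape)   # (32, 64, 64, 3)
print(stitch_clips([clip_a, clip_b], "space").shape)  # (16, 64, 128, 3)
```

In either arrangement the stitched result is treated as a single video during fine-tuning, with one joint caption (step **ii**) describing all of its scenes.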