AI Summary
This work investigates the functional role of the first frame in video generation models: beyond serving as a spatiotemporal starting point, it acts as a conceptual memory buffer for visual entities. Leveraging this insight, we propose a plug-and-play, first-frame-driven customization paradigm that requires no modification to the model architecture. Through reference learning, our method injects target visual concepts from a small set of exemplars (20–50 samples) directly into the first frame, which then provides persistent semantic anchoring and cross-frame consistency constraints throughout generation. The approach achieves fine-grained, robust, and generalizable control over video content. It significantly outperforms existing few-shot customization methods across diverse scenarios and, for the first time, systematically reveals and exploits the first frame's intrinsic capacity as a "lightweight conceptual memory carrier."
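As a rough, non-authoritative illustration of this plug-and-play paradigm, the Python sketch below composites reference concept images into the conditioning frame of a generic image-to-video model. Everything in it is an assumption made for illustration: `build_first_frame`, the side-by-side tiling, the `generate_video` placeholder, and the file names are not from the paper, whose reference-learning procedure is not detailed in this summary.

```python
# A rough sketch of the first-frame-as-memory idea: reference concept
# images are composited into the conditioning frame, and an unmodified
# first-frame-conditioned video model is then run on the result.
from PIL import Image


def build_first_frame(reference_images, canvas_size=(1280, 720)):
    """Tile reference concept images into a single first frame.

    The resulting frame plays the role of a 'conceptual memory buffer':
    the video model can draw on the entities it contains in later frames.
    """
    if not reference_images:
        raise ValueError("need at least one reference image")
    canvas = Image.new("RGB", canvas_size, "white")
    tile_w = canvas_size[0] // len(reference_images)
    for i, ref in enumerate(reference_images):
        # Resize each reference into its tile and paste them side by side;
        # this grid layout is an illustrative choice, not the paper's method.
        canvas.paste(ref.resize((tile_w, canvas_size[1])), (i * tile_w, 0))
    return canvas


def generate_video(first_frame, prompt):
    # Placeholder: plug in any image-to-video model that conditions on
    # a first frame (e.g., an image-to-video diffusion pipeline).
    raise NotImplementedError


if __name__ == "__main__":
    refs = [Image.open(p) for p in ("subject.png", "object.png")]  # hypothetical files
    frame0 = build_first_frame(refs)
    video = generate_video(frame0, "the subject picks up the object and walks away")
```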
Abstract
What role does the first frame play in video generation models? Traditionally, it is viewed as the spatiotemporal starting point of a video, merely a seed for subsequent animation. In this work, we reveal a fundamentally different perspective: video models implicitly treat the first frame as a conceptual memory buffer that stores visual entities for later reuse during generation. Leveraging this insight, we show that robust and generalized video content customization is achievable in diverse scenarios using only 20–50 training examples, without architectural changes or large-scale finetuning. This unveils a powerful, overlooked capability of video generation models for reference-based video customization.