🤖 AI Summary
Diffusion models face two key challenges in image animation: (1) the high dimensionality of video data makes training samples scarce, causing overfitting and memorization rather than prompt-driven motion synthesis; and (2) poor generalization to unseen motion patterns, with few-shot motion adaptation still under-explored. To address these challenges, we propose the Modular Image-to-Video Adapter (MIVA), which decouples motion into lightweight, independent, and composable specialized modules, each trainable with only ~10 samples. MIVA enables single-GPU training, prompt-free explicit motion selection, and parallel multi-module inference. Experiments demonstrate that MIVA matches or even surpasses the visual quality of fully trained diffusion models while significantly improving motion controllability and novel motion generation, effectively overcoming the motion-generalization bottleneck in few-shot settings.
📝 Abstract
Diffusion models (DMs) have recently achieved impressive photorealism in image and video generation. However, their application to image animation remains limited, even when trained on large-scale datasets. Two primary challenges contribute to this: the high dimensionality of video signals leads to a scarcity of training data, causing DMs to favor memorization over prompt compliance when generating motion; moreover, DMs struggle to generalize to novel motion patterns not present in the training set, and fine-tuning them to learn such patterns, especially from limited training data, remains under-explored. To address these limitations, we propose the Modular Image-to-Video Adapter (MIVA), a lightweight sub-network attachable to a pre-trained DM; each MIVA is designed to capture a single motion pattern, and multiple MIVAs can be composed via parallelization. MIVAs can be efficiently trained on approximately ten samples using a single consumer-grade GPU. At inference time, users specify motion by selecting one or more MIVAs, eliminating the need for prompt engineering. Extensive experiments demonstrate that MIVA enables more precise motion control while matching, or even surpassing, the generation quality of models trained on significantly larger datasets.
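The composition scheme described above, a frozen backbone with independently trained adapters that are selected rather than prompted at inference, can be sketched as follows. This is a minimal illustration under assumed simplifications (scalar-scaled residual adapters on a toy feature vector; the function names `base_model`, `make_miva`, and `animate` are hypothetical and not from the paper):

```python
# Sketch of the modular-adapter idea: a frozen base model plus small,
# independently trained residual modules ("MIVAs"), each capturing one
# motion pattern. Motion is chosen by selecting adapters, not by prompts.
# All names here are illustrative; the paper's actual architecture differs.
from typing import Callable, Dict, List

Feature = List[float]

def base_model(x: Feature) -> Feature:
    # Stand-in for the frozen pre-trained diffusion backbone.
    return [2.0 * v for v in x]

def make_miva(scale: float) -> Callable[[Feature], Feature]:
    # Each adapter is tiny and trained independently (~10 samples,
    # per the abstract); here it is just a scaled residual.
    def adapter(h: Feature) -> Feature:
        return [scale * v for v in h]
    return adapter

def animate(x: Feature,
            adapters: Dict[str, Callable[[Feature], Feature]],
            selected: List[str]) -> Feature:
    # Run the frozen backbone once, then add the residuals of the
    # user-selected adapters in parallel (all computed from the same
    # base features, then summed).
    h = base_model(x)
    residuals = [adapters[name](h) for name in selected]
    out = h[:]
    for r in residuals:
        out = [a + b for a, b in zip(out, r)]
    return out
```

For example, registering `{"zoom": make_miva(0.5), "pan": make_miva(0.25)}` and calling `animate(x, adapters, ["zoom", "pan"])` composes both motion patterns without any prompt, mirroring the explicit, parallel multi-module selection the abstract describes.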