Few-Shot-Based Modular Image-to-Video Adapter for Diffusion Models

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models face two key challenges in image animation: (1) high-dimensional video data leads to scarce training samples, causing overfitting and memory-based generation rather than prompt-driven motion synthesis; and (2) poor generalization to unseen motion patterns, with insufficient research on few-shot motion adaptation. To address these, we propose Modular and Interchangeable Video Adapters (MIVA), which decouples motion into lightweight, independent, and composable specialized modules—each trainable with only ~10 samples. MIVA enables single-GPU training, prompt-free explicit motion selection, and parallel multi-module inference. Experiments demonstrate that MIVA preserves or even surpasses the visual quality of fully trained diffusion models while significantly improving motion controllability and novel motion generation—effectively overcoming the motion generalization bottleneck under few-shot settings.

📝 Abstract
Diffusion models (DMs) have recently achieved impressive photorealism in image and video generation. However, their application to image animation remains limited, even when trained on large-scale datasets. Two primary challenges contribute to this: the high dimensionality of video signals leads to a scarcity of training data, causing DMs to favor memorization over prompt compliance when generating motion; moreover, DMs struggle to generalize to novel motion patterns not present in the training set, and fine-tuning them to learn such patterns, especially with limited training data, remains under-explored. To address these limitations, we propose Modular Image-to-Video Adapters (MIVA): lightweight sub-networks attachable to a pre-trained DM, each designed to capture a single motion pattern and scalable via parallelization. MIVAs can be efficiently trained on approximately ten samples using a single consumer-grade GPU. At inference time, users specify motion by selecting one or multiple MIVAs, eliminating the need for prompt engineering. Extensive experiments demonstrate that MIVA enables more precise motion control while maintaining, or even surpassing, the generation quality of models trained on significantly larger datasets.
Problem

Research questions and friction points this paper is trying to address.

Addresses diffusion models' limited ability to animate images
Overcomes data scarcity and motion-generalization challenges
Enables precise motion control with minimal training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular lightweight sub-networks, each capturing a single motion pattern
Few-shot training with ~10 samples on a consumer-grade GPU
Parallelizable adapters for precise, prompt-free motion control
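The modular-adapter idea above can be illustrated with a minimal sketch. The paper releases no code, so everything below is an assumption: the names (`MotionAdapter`, `apply_adapters`), the low-rank residual form (LoRA-style), and the additive composition of selected adapters are illustrative stand-ins for how a frozen base model's output might be combined with independently trained motion modules chosen at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

class MotionAdapter:
    """Hypothetical lightweight motion module: a low-rank residual
    update on the frozen base model's features (LoRA-style assumption)."""
    def __init__(self, dim, rank=4):
        self.down = rng.normal(0.0, 0.02, (dim, rank))
        self.up = np.zeros((rank, dim))  # zero-init: adapter starts as a no-op

    def delta(self, x):
        # Residual contribution of this adapter for features x.
        return x @ self.down @ self.up

def apply_adapters(base_features, adapters):
    """Compose the frozen base output with the selected adapters' residuals.
    Selecting a subset of adapters stands in for prompt-free motion choice."""
    out = base_features.copy()
    for adapter in adapters:
        out = out + adapter.delta(base_features)
    return out

dim = 8
features = rng.normal(size=(2, dim))  # stand-in for frozen base-model features
zoom = MotionAdapter(dim)             # e.g. a "zoom-in" motion module
pan = MotionAdapter(dim)              # e.g. a "pan-left" motion module

# Untrained adapters (zero-initialized up-projection) leave the base
# output unchanged; few-shot training would fill in each motion delta.
out = apply_adapters(features, [zoom, pan])
```

Because each adapter only adds a residual, modules trained independently on ~10 samples each can be mixed and matched at inference without retraining the base model, which is the composability property the summary highlights.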
Zhenhao Li
Huawei Technologies Canada
Shaohan Yi
University of Waterloo
Zheng Liu
Huawei Technologies Canada
Leonartinus Gao
University of British Columbia
Minh Ngoc Le
University of Toronto
Ambrose Ling
University of Toronto
Zhuoran Wang
University of Waterloo
Md Amirul Islam
Center for Advanced AI, Accenture
Zhixiang Chi
University of Toronto
Yuanhao Yu
Huawei Technologies Canada