🤖 AI Summary
Existing video generation methods struggle to achieve flexible semantic control due to rigid spatial constraints or a lack of cross-conditional interoperability. This work proposes a lightweight, unified framework for semantically controllable video generation by dynamically producing personalized LoRA weights via a hypernetwork for arbitrary semantic inputs. An adaptive LoRA module is constructed through auxiliary matrices and integrated into a frozen diffusion backbone, eliminating the need for separate training per condition. The approach achieves zero-shot generalization to unseen semantic conditions for the first time, with a model size under 150 MB. It generates videos that are semantically consistent yet stylistically diverse across various control signals, substantially reducing deployment costs.
📝 Abstract
Achieving semantic alignment across diverse video generation conditions remains a significant challenge. Methods that rely on explicit structural guidance often enforce rigid spatial constraints that limit semantic flexibility, whereas models tailored to individual control types lack interoperability and adaptability. These design bottlenecks hinder progress toward flexible and efficient semantic video generation. To address this, we propose Video2LoRA, a scalable and generalizable framework for semantically controlled video generation that conditions on a reference video. Video2LoRA employs a lightweight hypernetwork to predict personalized LoRA weights for each semantic input, which are combined with auxiliary matrices to form adaptive LoRA modules integrated into a frozen diffusion backbone. This design enables the model to generate videos consistent with the reference semantics while preserving key style and content variations, eliminating the need for any per-condition training. Notably, the final model weighs less than 150 MB, making it highly efficient for storage and deployment. Video2LoRA achieves coherent, semantically aligned generation across diverse conditions and exhibits strong zero-shot generalization to unseen semantics.
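The core mechanism described above, a hypernetwork predicting a condition-specific low-rank factor that is combined with a shared auxiliary matrix and added to a frozen layer, can be illustrated with a minimal NumPy sketch. All names, shapes, and the linear hypernetwork here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_sem, d_in, d_out, rank = 16, 32, 32, 4

# Frozen diffusion-backbone layer weight (never updated per condition).
W_frozen = rng.standard_normal((d_out, d_in))

# Hypernetwork: here just one linear map from a semantic embedding to the
# entries of the low-rank factor A ("personalized" LoRA weights).
H = rng.standard_normal((rank * d_in, d_sem)) * 0.01

# Shared auxiliary matrix B, reused across all semantic conditions.
B_aux = rng.standard_normal((d_out, rank)) * 0.01


def adaptive_lora_forward(x, sem):
    """Apply the frozen layer plus a condition-specific LoRA update."""
    A = (H @ sem).reshape(rank, d_in)   # predicted low-rank factor
    W_eff = W_frozen + B_aux @ A        # adaptive LoRA composition
    return W_eff @ x


sem = rng.standard_normal(d_sem)        # embedding of a reference video
x = rng.standard_normal(d_in)
y = adaptive_lora_forward(x, sem)
```

With a zero semantic embedding the predicted factor `A` vanishes and the layer reduces exactly to the frozen backbone, which is why no per-condition training of the backbone is needed: only the hypernetwork and auxiliary matrix carry the control signal.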