🤖 AI Summary
To address the bottleneck of imprecise user-intent parsing in controllable video generation, this paper proposes a novel condition-disentanglement paradigm: multimodal inputs (text, images, videos, and fine-grained cues such as regions, motion, and camera poses) are uniformly mapped to structured dense captions, which then drive video synthesis. The core contributions are twofold: (1) Any2CapIns, the first instruction-tuning dataset for arbitrary-modality-to-caption mapping (337K samples); and (2) a multimodal large language model (MLLM)-based framework for cross-modal semantic alignment and structured caption generation. The method enables compositional, cross-modal, and fine-grained control. Evaluated with multiple state-of-the-art video generation models, it significantly improves controllability (a +28.6% gain in instruction following) and generation quality (a 12.4% reduction in Fréchet Video Distance, FVD).
📝 Abstract
To address the bottleneck of accurately interpreting user intent in current video generation systems, we present Any2Caption, a novel framework for controllable video generation under any condition. The key idea is to decouple the interpretation of the various conditions from the video synthesis step. Leveraging modern multimodal large language models (MLLMs), Any2Caption interprets diverse inputs (text, images, videos, and specialized cues such as regions, motion, and camera poses) into dense, structured captions that provide backbone video generators with better guidance. We also introduce Any2CapIns, a large-scale dataset with 337K instances and 407K conditions for any-condition-to-caption instruction tuning. Comprehensive evaluations demonstrate significant improvements in controllability and video quality across various existing video generation models. Project Page: https://sqwu.top/Any2Cap/
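The decoupling described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Conditions` fields, `interpret_conditions`, and `generate_video` names are all hypothetical stand-ins (the real Stage 1 is a fine-tuned MLLM, and Stage 2 is an off-the-shelf video generator left unchanged).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical condition bundle; the field names are illustrative only.
@dataclass
class Conditions:
    text: str = ""
    image_path: Optional[str] = None
    motion: Optional[str] = None
    camera_pose: Optional[str] = None

def interpret_conditions(cond: Conditions) -> str:
    """Stage 1 stand-in: map arbitrary input conditions to one structured
    dense caption. In Any2Caption this is done by a fine-tuned MLLM."""
    parts = [f"subject: {cond.text}"]
    if cond.image_path:
        parts.append(f"reference image: {cond.image_path}")
    if cond.motion:
        parts.append(f"motion: {cond.motion}")
    if cond.camera_pose:
        parts.append(f"camera: {cond.camera_pose}")
    return "; ".join(parts)

def generate_video(dense_caption: str) -> str:
    """Stage 2 placeholder: any text-conditioned video generator consumes
    the dense caption as-is, so the backbone needs no modification."""
    return f"<video conditioned on: {dense_caption}>"

caption = interpret_conditions(
    Conditions(text="a red kite over the beach",
               motion="slow ascent",
               camera_pose="orbit left")
)
video = generate_video(caption)
```

The point of the design is that only Stage 1 sees the heterogeneous modalities; the generator interface stays a single text channel, which is why the approach plugs into multiple existing video models without retraining them.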