Controllable Video Generation: A Survey

📅 2025-07-22
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing text-to-video generation models struggle to support complex, fine-grained user control. This paper presents a systematic survey of controllable video generation, organizing the field into a three-part taxonomy (single-condition, multi-condition, and universal controllable generation) according to the control signals methods rely on. It reviews the theoretical foundations of diffusion-based video generation and analyzes how non-textual conditions (e.g., camera motion, depth maps, human pose) are fused with text prompts to guide the denoising process. Key contributions include: (1) a comprehensive taxonomy of controllable video generation methods; (2) a unified analysis of conditioning and fusion mechanisms in video diffusion models; and (3) a curated repository of the surveyed literature.
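To make the conditioning idea concrete, below is a minimal PyTorch sketch of one denoising step that fuses a text prompt with a non-textual control signal via two-stream classifier-free guidance. The denoiser `eps_model`, its signature, and the guidance weights are hypothetical placeholders; this is one common recipe in the surveyed literature, not the architecture of any particular paper.

```python
import torch

def guided_denoise_step(eps_model, x_t: torch.Tensor, t: torch.Tensor,
                        text_emb, ctrl_emb, w_text=7.5, w_ctrl=1.5):
    """One denoising step with joint text + control guidance (sketch).

    `ctrl_emb` stands in for an encoded non-textual condition such as a
    camera trajectory, depth map, or pose sequence. All names are
    hypothetical, chosen only to illustrate the fusion pattern.
    """
    # Noise predictions under three condition settings.
    eps_uncond = eps_model(x_t, t, text_emb=None, ctrl_emb=None)
    eps_text = eps_model(x_t, t, text_emb=text_emb, ctrl_emb=None)
    eps_full = eps_model(x_t, t, text_emb=text_emb, ctrl_emb=ctrl_emb)

    # Push the prediction toward the text condition, then further toward
    # the joint text + control condition; w_ctrl scales how strongly the
    # non-textual signal steers the sample beyond text alone.
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_ctrl * (eps_full - eps_text))
```

Separating the two guidance weights lets a user trade off prompt adherence against control-signal adherence at sampling time, with no retraining.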

📝 Abstract
With the rapid development of AI-generated content (AIGC), video generation has emerged as one of its most dynamic and impactful subfields. In particular, the advancement of video generation foundation models has led to growing demand for controllable video generation methods that can more accurately reflect user intent. Most existing foundation models are designed for text-to-video generation, where text prompts alone are often insufficient to express complex, multi-modal, and fine-grained user requirements. This limitation makes it challenging for users to generate videos with precise control using current models. To address this issue, recent research has explored the integration of additional non-textual conditions, such as camera motion, depth maps, and human pose, to extend pretrained video generation models and enable more controllable video synthesis. These approaches aim to enhance the flexibility and practical applicability of AIGC-driven video generation systems. In this survey, we provide a systematic review of controllable video generation, covering both theoretical foundations and recent advances in the field. We begin by introducing the key concepts and commonly used open-source video generation models. We then focus on control mechanisms in video diffusion models, analyzing how different types of conditions can be incorporated into the denoising process to guide generation. Finally, we categorize existing methods based on the types of control signals they leverage, including single-condition generation, multi-condition generation, and universal controllable generation. For a complete list of the reviewed literature on controllable video generation, please visit our curated repository at https://github.com/mayuelala/Awesome-Controllable-Video-Generation.
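On the question of how a condition actually enters the denoising network, two injection mechanisms recur across this literature: channel-wise concatenation for pixel-aligned conditions (depth maps, pose renderings) and cross-attention for sequence-like conditions (text tokens, camera trajectories). The following PyTorch sketch shows both paths in one module; the module, its dimensions, and its names are illustrative assumptions, not an API from any surveyed codebase.

```python
import torch
import torch.nn as nn

class ConditionInjector(nn.Module):
    """Hypothetical block showing two common condition-injection paths."""

    def __init__(self, latent_ch=4, cond_ch=3, dim=320, ctx_dim=768):
        super().__init__()
        # Concatenation path: project [latent ; dense condition] to `dim`.
        self.in_proj = nn.Conv3d(latent_ch + cond_ch, dim,
                                 kernel_size=3, padding=1)
        # Cross-attention path: latent tokens query the condition tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads=8, kdim=ctx_dim,
                                          vdim=ctx_dim, batch_first=True)

    def forward(self, z_t, dense_cond, seq_cond):
        # z_t:        (B, latent_ch, T, H, W) noisy video latent
        # dense_cond: (B, cond_ch,   T, H, W) pixel-aligned control, e.g. depth
        # seq_cond:   (B, L, ctx_dim) token sequence, e.g. text or camera path
        h = self.in_proj(torch.cat([z_t, dense_cond], dim=1))
        b, d, t, hh, ww = h.shape
        tokens = h.flatten(2).transpose(1, 2)        # (B, T*H*W, dim)
        attn_out, _ = self.attn(tokens, seq_cond, seq_cond)
        tokens = tokens + attn_out                   # residual fusion
        return tokens.transpose(1, 2).reshape(b, d, t, hh, ww)
```

Dense, spatially aligned signals benefit from the concatenation path because their per-pixel structure lines up with the latent grid, while global or sequential signals are more naturally attended to as tokens.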
Problem

Research questions and friction points this paper is trying to address.

Enhancing user control in video generation models
Integrating non-textual conditions for precise video synthesis
Surveying control mechanisms in video diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates non-textual conditions such as camera motion, depth maps, and human pose
Extends pretrained models for controllable synthesis (see the sketch after this list)
Categorizes methods into single-condition, multi-condition, and universal controllable generation
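One widely used recipe behind the "extends pretrained models" point is a ControlNet-style adapter: freeze the pretrained denoiser, train a copy of its encoder on the control signal, and wire the copy's features back through zero-initialized projections. A minimal PyTorch sketch under those assumptions follows; the encoder object and channel list are placeholders, not the interface of any specific model.

```python
import copy
import torch.nn as nn

def attach_control_branch(frozen_encoder: nn.Module,
                          block_channels=(320, 640, 1280)):
    """Sketch of a ControlNet-style extension of a pretrained denoiser."""
    # Freeze the pretrained backbone so its generative prior is preserved.
    for p in frozen_encoder.parameters():
        p.requires_grad_(False)

    # A trainable copy of the encoder ingests the control signal.
    control_branch = copy.deepcopy(frozen_encoder)
    for p in control_branch.parameters():
        p.requires_grad_(True)

    # Zero-initialized 1x1 convs feed control features back into the
    # backbone; they output zeros at step 0, so training starts from the
    # unmodified pretrained model and controllability is learned gradually.
    zero_convs = nn.ModuleList()
    for ch in block_channels:
        conv = nn.Conv2d(ch, ch, kernel_size=1)
        nn.init.zeros_(conv.weight)
        nn.init.zeros_(conv.bias)
        zero_convs.append(conv)

    return control_branch, zero_convs
```

Because the zero-initialized projections contribute nothing at initialization, the extended model initially behaves exactly like the frozen backbone, which is why this recipe can add control without degrading generation quality.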