ACD: Direct Conditional Control for Video Diffusion Models via Attention Supervision

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video diffusion models suffer from insufficient conditional control fidelity: classifier-free guidance struggles to precisely satisfy textual instructions, while classifier-based guidance often introduces artifacts and offers limited controllability. This paper proposes a direct, explicit conditional control framework. Our method integrates (1) an attention map alignment mechanism—the first of its kind—to enable fine-grained, step-wise conditional supervision during generation; (2) sparse 3D-aware object layouts as lightweight, spatiotemporal conditioning signals; and (3) a Layout ControlNet architecture with an automated layout annotation pipeline. Compatible with the classifier-free diffusion paradigm, our approach significantly improves conditional alignment accuracy across multiple benchmarks—e.g., T2V-Bench and VidBench—while preserving strong temporal coherence and visual fidelity. Ablations confirm that attention alignment and sparse 3D layouts jointly enhance controllability without compromising generation quality.

📝 Abstract
Controllability is a fundamental requirement in video synthesis, where accurate alignment with conditioning signals is essential. Existing classifier-free guidance methods typically achieve conditioning indirectly by modeling the joint distribution of data and conditions, which often results in limited controllability over the specified conditions. Classifier-based guidance enforces conditions through an external classifier, but the model may exploit this mechanism to raise the classifier score without genuinely satisfying the intended condition, resulting in adversarial artifacts and limited effective controllability. In this paper, we propose Attention-Conditional Diffusion (ACD), a novel framework for direct conditional control in video diffusion models via attention supervision. By aligning the model's attention maps with external control signals, ACD achieves more direct and reliable controllability than guidance-based approaches. To support this, we introduce a sparse 3D-aware object layout as an efficient conditioning signal, along with a dedicated Layout ControlNet and an automated annotation pipeline for scalable layout integration. Extensive experiments on benchmark video generation datasets demonstrate that ACD delivers superior alignment with conditioning inputs while preserving temporal coherence and visual fidelity, establishing an effective paradigm for conditional video synthesis.
Problem

Research questions and friction points this paper is trying to address.

Enhances controllability in video diffusion models
Directly aligns attention maps with external control signals
Improves conditioning accuracy and reduces adversarial artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct conditional control via attention supervision
Sparse 3D-aware object layout as conditioning signal
Layout ControlNet with automated annotation pipeline
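The paper does not spell out the attention-supervision objective, but the core idea described above — aligning the model's attention maps with external layout signals — can be sketched as a simple alignment loss. All names and tensor shapes below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def attention_alignment_loss(attn_maps, layout_masks):
    """Hypothetical attention-supervision loss: MSE between per-object
    attention maps and binary layout masks.

    attn_maps    : (num_objects, T, H, W) spatiotemporal attention weights,
                   e.g. cross-attention for each object token (assumed shape)
    layout_masks : (num_objects, T, H, W) binary masks rendered from the
                   sparse 3D-aware object layout (assumed shape)
    """
    attn = np.asarray(attn_maps, dtype=np.float64)
    masks = np.asarray(layout_masks, dtype=np.float64)
    # Rescale each object's attention map to [0, 1] so it is comparable
    # to a binary mask before computing the mean squared error.
    attn = attn / (attn.max(axis=(1, 2, 3), keepdims=True) + 1e-8)
    return float(np.mean((attn - masks) ** 2))
```

In a training loop this term would be added to the usual diffusion denoising loss at each step, penalizing attention that drifts away from the specified object layout; the weighting between the two terms is a design choice not specified here.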