AlignVid: Training-Free Attention Scaling for Semantic Fidelity in Text-Guided Image-to-Video Generation

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-guided image-to-video (TI2V) methods often neglect fine-grained prompt semantics when prompts require substantial modifications of the input image, such as object addition, removal, or replacement. To address this, we propose a training-free attention scaling framework that lowers cross-attention entropy from an energy perspective, sharpening foreground-background separation for more precise spatial control. The method combines lightweight query/key scaling to modulate the cross-attention distribution, selective intervention over spatial blocks and denoising timesteps, and a parameter-free guidance scheduling mechanism inspired by a Gaussian-blur pilot study. Evaluated on our newly constructed benchmark OmitI2V, the approach significantly improves semantic consistency and prompt adherence while preserving high-fidelity visual generation quality.

📝 Abstract
Text-guided image-to-video (TI2V) generation has recently achieved remarkable progress, particularly in maintaining subject consistency and temporal coherence. However, existing methods still struggle to adhere to fine-grained prompt semantics, especially when prompts entail substantial transformations of the input image (e.g., object addition, deletion, or modification), a shortcoming we term semantic negligence. In a pilot study, we find that applying a Gaussian blur to the input image improves semantic adherence. Analyzing attention maps, we observe clearer foreground-background separation. From an energy perspective, this corresponds to a lower-entropy cross-attention distribution. Motivated by this, we introduce AlignVid, a training-free framework with two components: (i) Attention Scaling Modulation (ASM), which directly reweights attention via lightweight Q or K scaling, and (ii) Guidance Scheduling (GS), which applies ASM selectively across transformer blocks and denoising steps to reduce visual quality degradation. This minimal intervention improves prompt adherence while limiting aesthetic degradation. In addition, we introduce OmitI2V to evaluate semantic negligence in TI2V generation, comprising 367 human-annotated samples that span addition, deletion, and modification scenarios. Extensive experiments demonstrate that AlignVid can enhance semantic fidelity.
Problem

Research questions and friction points this paper is trying to address.

Addresses semantic negligence in text-guided image-to-video generation
Improves adherence to fine-grained prompt semantics like object modifications
Enhances semantic fidelity without training via attention scaling modulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free attention scaling modulation for semantic adherence
Selective guidance scheduling across transformer blocks and steps
Lightweight Q or K scaling to reweight attention maps
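The core intuition behind the Q/K scaling described above can be sketched in a few lines of numpy: multiplying the queries (equivalently, the keys) by a factor greater than 1 before the softmax acts like lowering a temperature, concentrating attention mass on fewer keys and reducing the entropy of the cross-attention distribution. This is a toy illustration, not the paper's implementation; the function names and the scale value are illustrative, and AlignVid applies this inside selected transformer blocks and denoising steps of a video diffusion model.

```python
import numpy as np

def attention_weights(q, k, scale=1.0):
    """Softmax cross-attention weights with an extra query scaling factor.

    A scale > 1 sharpens the softmax distribution (temperature-like
    effect), which is the entropy-lowering behavior ASM exploits.
    """
    logits = (scale * q) @ k.T / np.sqrt(q.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)

def entropy(p):
    """Shannon entropy of each attention row."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 64))   # 4 query (image-patch) tokens
k = rng.normal(size=(16, 64))  # 16 key (text-prompt) tokens

base = attention_weights(q, k)             # unmodified attention
scaled = attention_weights(q, k, scale=2)  # sharpened attention

# Mean per-row entropy drops when queries are scaled up.
print(entropy(base).mean(), entropy(scaled).mean())
```

Scaling K instead of Q produces the same logits and hence the same effect, which is why the paper can choose whichever is cheaper to intervene on.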
Yexin Liu, The Hong Kong University of Science and Technology (AIGC)
Wen-Jie Shu, Hong Kong University of Science and Technology
Zile Huang, University of Central Florida
Haoze Zheng, Hong Kong University of Science and Technology
Yueze Wang, Beijing Academy of Artificial Intelligence (BAAI) (Multimodal, Data-centric AI)
Manyuan Zhang, The Chinese University of Hong Kong
Ser-Nam Lim, University of Central Florida
Harry Yang, HKUST (Computer Vision, Machine Learning)