🤖 AI Summary
Existing text-guided image-to-video (TI2V) methods often neglect fine-grained prompt semantics when the prompt demands substantial changes to the input image, such as object addition, deletion, or modification. To address this, the paper proposes AlignVid, a training-free framework that lowers cross-attention entropy, analyzed from an energy perspective, to sharpen foreground-background separation and enable more precise spatial control. The method combines Attention Scaling Modulation, which reweights cross-attention via lightweight query/key scaling, with Guidance Scheduling, which applies this intervention selectively across transformer blocks and denoising timesteps; the design is motivated by the observation that Gaussian-blurring the input image improves semantic adherence. Evaluated on the newly constructed benchmark OmitI2V, AlignVid significantly improves semantic consistency and prompt adherence while preserving high-fidelity visual generation quality.
📝 Abstract
Text-guided image-to-video (TI2V) generation has recently achieved remarkable progress, particularly in maintaining subject consistency and temporal coherence. However, existing methods still struggle to adhere to fine-grained prompt semantics, especially when prompts entail substantial transformations of the input image (e.g., object addition, deletion, or modification), a shortcoming we term semantic negligence. In a pilot study, we find that applying a Gaussian blur to the input image improves semantic adherence. Analyzing attention maps, we observe clearer foreground-background separation; from an energy perspective, this corresponds to a lower-entropy cross-attention distribution. Motivated by this, we introduce AlignVid, a training-free framework with two components: (i) Attention Scaling Modulation (ASM), which directly reweights attention via lightweight Q or K scaling, and (ii) Guidance Scheduling (GS), which applies ASM selectively across transformer blocks and denoising steps. This minimal intervention improves prompt adherence while limiting aesthetic degradation. In addition, we introduce OmitI2V to evaluate semantic negligence in TI2V generation, comprising 367 human-annotated samples that span addition, deletion, and modification scenarios. Extensive experiments demonstrate that AlignVid can enhance semantic fidelity.
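The core mechanism behind ASM can be illustrated with a minimal sketch: multiplying the queries (or keys) by a factor greater than one scales every attention logit by that factor, which sharpens the softmax distribution and lowers its entropy, the effect the abstract associates with clearer foreground-background separation. The code below is an illustrative toy example with random tensors, not the paper's implementation; the `scale` parameter and shapes are assumptions for demonstration.

```python
import numpy as np

def attention_probs(q, k, scale=1.0):
    # Scaled dot-product cross-attention weights.
    # Multiplying q by `scale` multiplies every logit by the same factor,
    # sharpening (scale > 1) or flattening (scale < 1) the softmax.
    logits = (scale * q) @ k.T / np.sqrt(q.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy (nats) of each query's attention distribution.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 64))   # 4 text-token queries (toy sizes)
k = rng.standard_normal((16, 64))  # 16 image-patch keys

h_base = entropy(attention_probs(q, k, scale=1.0)).mean()
h_scaled = entropy(attention_probs(q, k, scale=2.0)).mean()
assert h_scaled < h_base  # Q scaling lowers cross-attention entropy
```

In a real diffusion transformer this rescaling would be applied only to selected cross-attention blocks and denoising steps, which is the role the abstract assigns to Guidance Scheduling.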