🤖 AI Summary
In text-to-video generation, simultaneously achieving human identity consistency, spatial layout coherence, and temporal motion smoothness remains challenging; existing end-to-end approaches suffer from inherent spatiotemporal optimization trade-offs. To address this, we propose a spatiotemporally decoupled two-stage generation framework: first, the text prompt is decomposed into spatial (image generation) and temporal (video generation) semantic components; then, a semantic prompt optimization mechanism and separate spatial and temporal feature modeling jointly enhance identity fidelity and motion naturalness. Our method achieves second place in the 2025 ACM Multimedia Challenge, attaining state-of-the-art performance in human identity consistency, text-video alignment, and overall visual quality.
📝 Abstract
Identity-preserving text-to-video (IPT2V) generation, which aims to create high-fidelity videos with consistent human identity, has become crucial for downstream applications. However, current end-to-end frameworks suffer from a critical spatial-temporal trade-off: optimizing for spatially coherent layouts of key elements (e.g., character identity preservation) often compromises instruction-compliant temporal smoothness, while prioritizing dynamic realism risks disrupting the spatial coherence of visual structures. To tackle this issue, we propose a simple yet effective spatial-temporal decoupled framework that decomposes representations into spatial features for layouts and temporal features for motion dynamics. Specifically, we propose a semantic prompt optimization mechanism and a stage-wise decoupled generation paradigm. The former decouples the prompt into spatial and temporal components. Aligned with the subsequent stage-wise decoupled approach, the spatial prompts guide the text-to-image (T2I) stage to generate coherent spatial features, while the temporal prompts direct the sequential image-to-video (I2V) stage to ensure motion consistency. Experimental results validate that our approach achieves excellent spatiotemporal consistency, demonstrating outstanding performance in identity preservation, text relevance, and video quality. By leveraging this simple yet robust mechanism, our algorithm secures the runner-up position in the 2025 ACM Multimedia Challenge.
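The two-stage pipeline described above can be sketched in code. Note this is a minimal illustrative sketch, not the authors' implementation: the paper's semantic prompt optimization module is unspecified here (it is likely an LLM-based rewriter, not a keyword heuristic), and `decompose_prompt`, `MOTION_WORDS`, and the model stubs are hypothetical names introduced for illustration.

```python
# Hypothetical sketch of the stage-wise decoupled IPT2V pipeline.
# Stage 1 (T2I) consumes the spatial prompt to fix identity and layout;
# stage 2 (I2V) animates the keyframe under the temporal prompt.

# Toy stand-in for the paper's semantic prompt optimization mechanism:
# clauses containing motion vocabulary are routed to the temporal prompt.
MOTION_WORDS = {"walks", "runs", "turns", "waves", "dances", "jumps", "nods"}

def decompose_prompt(prompt: str) -> tuple[str, str]:
    """Split a prompt into spatial (identity/layout) and temporal (motion) parts."""
    spatial, temporal = [], []
    for clause in prompt.split(","):
        words = set(clause.lower().split())
        (temporal if words & MOTION_WORDS else spatial).append(clause.strip())
    return ", ".join(spatial), ", ".join(temporal)

def generate_video(prompt: str, t2i_model, i2v_model):
    """Run the two decoupled stages: T2I on spatial cues, then I2V on temporal cues."""
    spatial_prompt, temporal_prompt = decompose_prompt(prompt)
    keyframe = t2i_model(spatial_prompt)            # stage 1: spatial coherence
    return i2v_model(keyframe, temporal_prompt)     # stage 2: motion consistency
```

In practice the two stubs would be replaced by pretrained T2I and I2V diffusion models; the point of the decoupling is that identity is frozen into the keyframe before any temporal optimization occurs.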