Identity-Preserving Text-to-Video Generation Guided by Simple yet Effective Spatial-Temporal Decoupled Representations

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-video generation, achieving simultaneous human identity consistency, spatial layout coherence, and temporal motion smoothness remains challenging; existing end-to-end approaches suffer from inherent spatiotemporal optimization trade-offs. To address this, we propose a spatiotemporally decoupled two-stage generation framework: first, decomposing the text prompt into spatial (image generation) and temporal (video generation) semantic components; then, introducing a semantic prompt optimization mechanism alongside spatial- and temporal-separate feature modeling to jointly enhance identity fidelity and motion naturalness. Our method achieves second place in the ACM Multimedia Challenge 2025, attaining state-of-the-art performance in human identity consistency, text-video alignment, and overall visual quality.

📝 Abstract
Identity-preserving text-to-video (IPT2V) generation, which aims to create high-fidelity videos with consistent human identity, has become crucial for downstream applications. However, current end-to-end frameworks suffer from a critical spatial-temporal trade-off: optimizing for spatially coherent layouts of key elements (e.g., character identity preservation) often compromises instruction-compliant temporal smoothness, while prioritizing dynamic realism risks disrupting the spatial coherence of visual structures. To tackle this issue, we propose a simple yet effective spatial-temporal decoupled framework that decomposes representations into spatial features for layouts and temporal features for motion dynamics. Specifically, our paper proposes a semantic prompt optimization mechanism and a stage-wise decoupled generation paradigm. The former decouples the prompt into spatial and temporal components. Aligned with the subsequent stage-wise decoupled approach, the spatial prompts guide the text-to-image (T2I) stage to generate coherent spatial features, while the temporal prompts direct the sequential image-to-video (I2V) stage to ensure motion consistency. Experimental results validate that our approach achieves excellent spatiotemporal consistency, demonstrating outstanding performance in identity preservation, text relevance, and video quality. By leveraging this simple yet robust mechanism, our algorithm secures the runner-up position in the 2025 ACM Multimedia Challenge.
Problem

Research questions and friction points this paper is trying to address.

Spatial-temporal trade-off in identity-preserving video generation
Balancing spatial coherence and temporal smoothness in videos
Decoupling spatial and temporal features for consistent video output
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial-temporal decoupled framework for video generation
Semantic prompt optimization mechanism for feature decoupling
Stage-wise generation paradigm ensuring spatiotemporal consistency
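The stage-wise decoupled paradigm summarized above can be sketched in code. This is a minimal illustration, not the authors' implementation: the prompt decomposition here is a naive keyword heuristic standing in for the paper's semantic prompt optimization mechanism, and `t2i_model` / `i2v_model` are hypothetical callables standing in for the actual T2I and I2V generators.

```python
def decompose_prompt(prompt: str) -> tuple[str, str]:
    """Split a prompt into spatial (appearance/layout) and temporal (motion)
    components. The paper uses a semantic prompt optimization mechanism; this
    keyword heuristic is purely illustrative."""
    motion_words = {"walks", "runs", "turns", "waves", "dances", "jumps"}
    spatial, temporal = [], []
    for clause in prompt.split(","):
        words = set(clause.lower().split())
        # Clauses containing motion verbs go to the temporal prompt.
        (temporal if words & motion_words else spatial).append(clause.strip())
    return ", ".join(spatial), ", ".join(temporal)


def generate_identity_video(prompt, reference_image, t2i_model, i2v_model):
    """Two-stage decoupled generation: spatial prompt drives T2I,
    temporal prompt drives the subsequent I2V stage."""
    spatial_prompt, temporal_prompt = decompose_prompt(prompt)
    # Stage 1: generate an identity-consistent keyframe from the spatial prompt.
    keyframe = t2i_model(spatial_prompt, reference_image)
    # Stage 2: animate the keyframe under the temporal (motion) prompt.
    return i2v_model(keyframe, temporal_prompt)
```

In this framing, identity preservation is handled entirely in stage 1 (a static image, where layout and identity are easier to optimize), while stage 2 only has to respect motion instructions, which is the trade-off separation the paper argues for.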
Yuji Wang
Tencent YouTu Lab, Shanghai Jiao Tong University
Moran Li
Researcher at YouTu Lab, Tencent
Video Generation, Generative AI, Computer Graphics
Xiaobin Hu
Tencent YouTu Lab; Technische Universität München (TUM)
Deep learning, Computer vision, VLM, Agents
Ran Yi
Associate Professor, Shanghai Jiao Tong University
Computer Vision, Computer Graphics
Jiangning Zhang
Tencent YouTu Lab
Han Feng
Tencent YouTu Lab
Weijian Cao
Tencent
CV, CG
Yabiao Wang
Tencent YouTu Lab
Chengjie Wang
Tencent YouTu Lab
Lizhuang Ma
Shanghai Jiao Tong University