🤖 AI Summary
Existing digital human video generation methods suffer from temporal motion inconsistency, identity distortion, and limited controllability, particularly in multi-subject scenarios. To address these challenges, we propose Interspatial Attention (ISA), a novel attention mechanism tailored for 4D human video generation. ISA is a cross-spatial attention built on relative positional encodings, integrated seamlessly into the DiT architecture and scalable for efficient inference. Coupled with a custom-designed video variational autoencoder, our latent diffusion model jointly conditions on camera parameters and pose sequences to enhance spatiotemporal coherence and semantic fidelity. Extensive experiments demonstrate significant improvements in multi-subject motion consistency and identity preservation, establishing new state-of-the-art performance on standard benchmarks. To foster reproducibility and further research, we publicly release both source code and pre-trained models.
📝 Abstract
Generating photorealistic videos of digital humans in a controllable manner is crucial for a plethora of applications. Existing approaches build either on template-based 3D representations or on emerging video generation models, but they suffer from poor quality or from limited consistency and identity preservation when generating individual or multiple digital humans. In this paper, we introduce a new interspatial attention (ISA) mechanism as a scalable building block for modern diffusion transformer (DiT)-based video generation models. ISA is a new type of cross attention that uses relative positional encodings tailored for the generation of human videos. Leveraging a custom-developed video variational autoencoder, we train a latent ISA-based diffusion model on a large corpus of video data. Our model achieves state-of-the-art performance for 4D human video synthesis, demonstrating remarkable motion consistency and identity preservation while providing precise control of the camera and body poses. Our code and model are publicly released at https://dsaurus.github.io/isa4d/.
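To make the abstract's core idea concrete, below is a minimal NumPy sketch of cross attention whose scores are modulated by a relative positional encoding, the general family of mechanism ISA belongs to. All names (`interspatial_cross_attention`, `rel_pos`, `w_bias`) and the single-head, learned-bias formulation are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interspatial_cross_attention(q_tokens, kv_tokens, rel_pos, w_bias):
    """Cross attention with a relative-position bias (illustrative sketch).

    q_tokens : (Nq, d)      query tokens (e.g., video latent tokens)
    kv_tokens: (Nk, d)      key/value tokens (e.g., pose/camera tokens)
    rel_pos  : (Nq, Nk, p)  relative positional encoding per query-key pair
    w_bias   : (p,)         hypothetical learned projection of the encoding
    """
    d = q_tokens.shape[-1]
    # Content-based attention scores, scaled as in standard attention.
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)        # (Nq, Nk)
    # Additive bias derived from relative positions between token pairs.
    bias = rel_pos @ w_bias                             # (Nq, Nk)
    attn = softmax(scores + bias, axis=-1)              # rows sum to 1
    return attn @ kv_tokens                             # (Nq, d)
```

In this sketch the relative encoding enters as an additive score bias, so attention can prefer spatially nearby body/camera tokens regardless of their absolute index; the actual ISA formulation in the paper may differ.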