Interspatial Attention for Efficient 4D Human Video Generation

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing digital human video generation methods suffer from temporal motion inconsistency, identity distortion, and limited controllability, particularly in multi-subject scenarios. To address these challenges, we propose Interspatial Attention (ISA), a novel attention mechanism tailored for 4D human video generation. ISA introduces cross-spatial attention based on relative positional encodings, integrates seamlessly into the DiT architecture, and scales to efficient inference. Coupled with a custom-designed video variational autoencoder, our latent diffusion model jointly conditions on camera parameters and pose sequences to enhance spatiotemporal coherence and semantic fidelity. Extensive experiments demonstrate significant improvements in multi-subject motion consistency and identity preservation over prior state-of-the-art methods on standard benchmarks. To foster reproducibility and further research, we publicly release both source code and pre-trained models.

📝 Abstract
Generating photorealistic videos of digital humans in a controllable manner is crucial for a plethora of applications. Existing approaches either build on methods that employ template-based 3D representations or emerging video generation models but suffer from poor quality or limited consistency and identity preservation when generating individual or multiple digital humans. In this paper, we introduce a new interspatial attention (ISA) mechanism as a scalable building block for modern diffusion transformer (DiT)-based video generation models. ISA is a new type of cross attention that uses relative positional encodings tailored for the generation of human videos. Leveraging a custom-developed video variational autoencoder, we train a latent ISA-based diffusion model on a large corpus of video data. Our model achieves state-of-the-art performance for 4D human video synthesis, demonstrating remarkable motion consistency and identity preservation while providing precise control of the camera and body poses. Our code and model are publicly released at https://dsaurus.github.io/isa4d/.
Problem

Research questions and friction points this paper is trying to address.

Improving quality and consistency in digital human video generation
Enhancing identity preservation across multiple digital humans
Achieving precise control over camera and body poses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces interspatial attention for human video generation
Uses diffusion transformer with relative positional encodings
Trains latent diffusion model on large video corpus
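The core idea listed above can be sketched in miniature: cross attention whose logits are biased by a function of the relative positions of query and key tokens. This is an illustrative stand-in for the paper's ISA mechanism, not its implementation; the names (`interspatial_cross_attention`, `w_bias`) and the linear bias over 3D offsets are hypothetical simplifications of the relative positional encoding described in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interspatial_cross_attention(q, k, v, q_pos, k_pos, w_bias):
    """Cross attention biased by relative 3D positions (toy sketch).

    q: (Nq, d) query tokens        q_pos: (Nq, 3) their 3D positions
    k, v: (Nk, d) key/value tokens k_pos: (Nk, 3) their 3D positions
    w_bias: (3,) weights mapping each relative offset to a scalar
            logit bias -- a hypothetical stand-in for the learned
            relative positional encoding in the paper.
    """
    d = q.shape[-1]
    # Standard scaled dot-product attention logits.
    scores = q @ k.T / np.sqrt(d)
    # Relative position of every query w.r.t. every key: (Nq, Nk, 3).
    rel = q_pos[:, None, :] - k_pos[None, :, :]
    # Add a position-dependent bias to the logits.
    scores = scores + rel @ w_bias
    return softmax(scores, axis=-1) @ v
```

In a DiT-style backbone, a block like this would let tokens of one spatial stream (e.g. one subject, or the pose condition) attend to another while remaining aware of where the tokens sit in 3D space, which is the intuition behind conditioning on camera parameters and pose sequences.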