Disentangled Hierarchical VAE for 3D Human-Human Interaction Generation

📅 2026-02-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing approaches often compress motion information into a single latent representation, which struggles to capture fine-grained actions and interaction semantics, frequently resulting in semantic misalignment and physical inconsistencies such as interpenetration or missing contacts. To address these limitations, this work proposes a Disentangled Hierarchical Variational Autoencoder (DHVAE) that explicitly separates global interaction context from individual motion patterns via a CoTransformer architecture. The method further integrates contrastive learning constraints with a skip-connected AdaLN-Transformer-enhanced DDIM diffusion denoising mechanism. This approach achieves the first explicit disentanglement of global and individual semantics in 3D human–human interaction generation, significantly outperforming existing methods in motion fidelity, text alignment, and physical plausibility while maintaining superior computational efficiency.
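The core disentanglement idea in the summary above — a shared global interaction latent alongside per-agent individual latents, instead of one monolithic code — can be illustrated with a toy numpy sketch. All names, shapes, and the random linear projections below are illustrative assumptions; the paper's actual encoder is a learned CoTransformer, not these fixed matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_hierarchical(motion_a, motion_b, d_global=8, d_indiv=8):
    """Toy stand-in for a DHVAE-style hierarchical encoder.

    motion_a / motion_b: (T, D) pose-feature sequences for the two agents.
    Returns one global interaction latent shared by both agents and one
    individual latent per agent, each sampled with the standard VAE
    reparameterization trick. Real models learn these projections; here
    they are fixed random matrices for illustration only.
    """
    T, D = motion_a.shape
    W_g = rng.standard_normal((2 * D, 2 * d_global)) * 0.1   # global head
    W_i = rng.standard_normal((D, 2 * d_indiv)) * 0.1        # individual head

    def reparam(stats):
        mu, logvar = np.split(stats, 2)
        return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

    # Global latent: pooled over time from the *joint* (concatenated)
    # features, so it can only encode information shared by both agents.
    joint = np.concatenate([motion_a, motion_b], axis=1).mean(axis=0)
    z_global = reparam(joint @ W_g)

    # Individual latents: pooled per agent, keeping agent-specific motion
    # patterns separate from the interaction context.
    z_a = reparam(motion_a.mean(axis=0) @ W_i)
    z_b = reparam(motion_b.mean(axis=0) @ W_i)
    return z_global, z_a, z_b

motion_a = rng.standard_normal((30, 16))  # 30 frames, 16-dim pose features
motion_b = rng.standard_normal((30, 16))
z_global, z_a, z_b = encode_hierarchical(motion_a, motion_b)
print(z_global.shape, z_a.shape, z_b.shape)  # (8,) (8,) (8,)
```

The point of the split is that downstream components (e.g. a latent diffusion model) can condition on, or edit, the interaction context and each agent's motion independently.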

πŸ“ Abstract
Generating realistic 3D Human-Human Interaction (HHI) requires coherent modeling of both the physical plausibility of the agents and their interaction semantics. Existing methods compress all motion information into a single latent representation, limiting their ability to capture fine-grained actions and inter-agent interactions; this often leads to semantic misalignment and physically implausible artifacts such as interpenetration or missing contacts. We propose a Disentangled Hierarchical Variational Autoencoder (DHVAE)-based latent diffusion framework for structured and controllable HHI generation. DHVAE explicitly disentangles the global interaction context and the individual motion patterns into a decoupled latent structure via a CoTransformer module. To mitigate implausible and physically inconsistent contacts in HHI, we incorporate contrastive learning constraints into DHVAE, promoting a more discriminative and physically plausible latent interaction space. For high-fidelity interaction synthesis, DHVAE performs a DDIM-based diffusion denoising process in the hierarchical latent space, enhanced by a skip-connected AdaLN-Transformer denoiser. Extensive evaluations show that DHVAE achieves superior motion fidelity, text alignment, and physical plausibility with greater computational efficiency.
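The AdaLN (adaptive LayerNorm) conditioning named in the abstract, as popularized by DiT-style diffusion transformers, can be sketched in a few lines of numpy: the denoiser normalizes its activations, then scales and shifts them with parameters regressed from the conditioning signal (e.g. a fused timestep and text embedding). The dimensions and the projection `W_mod` below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaln(x, cond, W_mod):
    """Adaptive LayerNorm: normalize x, then apply a per-channel scale
    and shift regressed from a conditioning vector.

    x:     (T, D) token activations inside the denoiser
    cond:  (C,)   conditioning vector (e.g. timestep + text embedding)
    W_mod: (C, 2*D) projection producing the scale and shift
    """
    # Plain LayerNorm over channels, with no learned affine terms:
    # the affine parameters come from the condition instead.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + 1e-5)

    scale, shift = np.split(cond @ W_mod, 2)
    # (1 + scale) so that cond = 0 reduces to plain LayerNorm.
    return x_hat * (1.0 + scale) + shift

T, D, C = 10, 16, 32
x = rng.standard_normal((T, D))
cond = rng.standard_normal(C)
W_mod = rng.standard_normal((C, 2 * D)) * 0.02
y = adaln(x, cond, W_mod)
print(y.shape)  # (10, 16)
```

In a full denoiser this modulation is applied before each attention and MLP sub-block, letting a single Transformer adapt its behavior to every diffusion timestep and text prompt without separate conditioning branches.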
Problem

Research questions and friction points this paper is trying to address.

3D Human-Human Interaction
motion disentanglement
physical plausibility
semantic alignment
latent representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Representation
Hierarchical VAE
Latent Diffusion
Contrastive Learning
3D Human-Human Interaction
Zichen Geng
Department of CSSE, The University of Western Australia, Crawley WA 6009, Australia
Zeeshan Hayder
Australian National University, Data61/CSIRO
Computer Vision · Machine Learning · AI
Bo Miao
Australian Institute for Machine Learning, The University of Adelaide, Adelaide SA 5000, Australia
Jian Liu
PhD at HNU and UWA
3D Machine Vision · Robotics · Embodied AI · Object Pose Estimation
Wei Liu
Associate Professor, Computer Science, The University of Western Australia
Natural Language Processing · Language Models · Text Mining · Data Mining · Multi-Agent Systems
Ajmal Mian
Department of CSSE, The University of Western Australia, Crawley WA 6009, Australia