Multi-Person Interaction Generation from Two-Person Motion Priors

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for generating multi-person social interactions require retraining models per scenario and suffer from interpenetration artifacts and motion mode collapse. Method: The paper proposes a graph-driven interactive sampling framework that decomposes multi-person motion into a Pairwise Interaction Graph (PIG) of dyadic interactions and leverages a pretrained two-person motion diffusion model as a prior, eliminating the need to train a dedicated multi-person model. Contribution/Results: It introduces graph-structured modeling and a dual-conditioning guidance mechanism that jointly enforce kinematic feasibility and graph-structured interaction dependencies during sampling, effectively suppressing interpenetration and improving motion diversity. Experiments show consistent improvements over state-of-the-art methods across diverse two-person and multi-person interaction scenarios, enabling high-fidelity, controllable, and diverse coordinated multi-agent motion synthesis.

📝 Abstract
Generating realistic human motion with high-level controls is a crucial task for social understanding, robotics, and animation. With high-quality MOCAP data becoming more available recently, a wide range of data-driven approaches have been presented. However, modelling multi-person interactions remains a less explored area. In this paper, we present Graph-driven Interaction Sampling, a method that can generate realistic and diverse multi-person interactions by leveraging existing two-person motion diffusion models as motion priors. Instead of training a new model specific to multi-person interaction synthesis, our key insight is to spatially and temporally separate complex multi-person interactions into a graph structure of two-person interactions, which we name the Pairwise Interaction Graph. We thus decompose the generation task into simultaneous single-person motion generation conditioned on another person's motion. In addition, to reduce artifacts such as interpenetration of body parts in generated multi-person interactions, we introduce two graph-dependent guidance terms into the diffusion sampling scheme. Unlike previous work, our method can produce varied high-quality multi-person interactions without producing repetitive individual motions. Extensive experiments demonstrate that our approach consistently outperforms existing methods in reducing artifacts when generating a wide range of two-person and multi-person interactions.
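The decomposition described in the abstract can be sketched as a small data structure plus one sampling round: people are graph nodes, dyadic interactions are edges, and a pretrained two-person denoiser is applied edge by edge. This is a minimal illustrative sketch, not the paper's implementation; the class and function names (`PairwiseInteractionGraph`, `denoise_round`, `two_person_denoiser`) are assumptions.

```python
import numpy as np

class PairwiseInteractionGraph:
    """Hypothetical sketch of a Pairwise Interaction Graph (PIG):
    nodes are people, edges are two-person interactions that a
    pretrained dyadic prior can handle."""

    def __init__(self, num_people):
        self.num_people = num_people
        self.edges = []  # list of (i, j) dyads

    def add_interaction(self, i, j):
        self.edges.append((i, j))

    def neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

def denoise_round(graph, motions, two_person_denoiser, t):
    """One sampling round: each person's motion is denoised conditioned
    on an interaction partner's current motion, so no dedicated
    multi-person model is ever trained."""
    updated = [m.copy() for m in motions]
    for (i, j) in graph.edges:
        # the dyadic prior jointly refines both partners' motions
        xi, xj = two_person_denoiser(motions[i], motions[j], t)
        updated[i], updated[j] = xi, xj
    return updated
```

A full sampler would repeat `denoise_round` over the diffusion timesteps; with more than two people, each person is simply touched by every edge it participates in.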
Problem

Research questions and friction points this paper is trying to address.

Generating realistic multi-person interactions from two-person motion priors
Reducing artifacts like body interpenetrations in generated interactions
Decomposing complex interactions into pairwise graphs for diverse outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages two-person motion diffusion models
Uses Pairwise Interaction Graph structure
Introduces graph-dependent guidance terms
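The guidance idea in the bullets above can be illustrated with one plausible penalty: treat joints of two interacting people as spheres, penalize overlap, and nudge the sample down that penalty's gradient at each diffusion step. This is an assumed form for demonstration only, not the paper's actual guidance terms; `interpenetration_penalty`, `guided_step`, and the contact `radius` are all hypothetical.

```python
import numpy as np

def interpenetration_penalty(joints_a, joints_b, radius=0.1):
    """Assumed guidance energy: squared overlap between joint spheres
    of two interacting people, approximating body interpenetration."""
    # pairwise distances between every joint of person a and person b
    diff = joints_a[:, None, :] - joints_b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    overlap = np.maximum(0.0, 2 * radius - dist)
    return float(np.sum(overlap ** 2))

def guided_step(x_a, x_b, penalty_fn, scale=0.1, eps=1e-4):
    """Move person a's joints down a finite-difference gradient of the
    penalty, the way a guidance term steers each sampling step."""
    g = np.zeros_like(x_a)
    for idx in np.ndindex(x_a.shape):
        xp = x_a.copy()
        xp[idx] += eps
        g[idx] = (penalty_fn(xp, x_b) - penalty_fn(x_a, x_b)) / eps
    return x_a - scale * g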