InterEdit: Navigating Text-Guided Multi-Human 3D Motion Editing

πŸ“… 2026-03-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing text-guided 3D motion editing methods struggle in multi-person scenarios due to the scarcity of paired data and the complexity of modeling interpersonal interactions. To address this, the authors propose InterEdit, a synchronized classifier-free conditional diffusion model that enables text-driven editing of multi-person 3D motions. They also introduce InterEdit3D, the first dataset annotated with two-person interactive motion variations, along with TMME, a new evaluation benchmark. The approach integrates a semantic-aware plan token alignment mechanism and an interaction-aware frequency-domain token alignment strategy, leveraging learnable tokens, the Discrete Cosine Transform (DCT), and energy pooling to capture high-level interaction semantics and periodic motion dynamics. Experiments demonstrate that InterEdit outperforms existing methods in both text-motion alignment and editing fidelity, achieving state-of-the-art performance.

πŸ“ Abstract
Text-guided 3D motion editing has seen success in single-person scenarios, but its extension to multi-person settings is less explored due to limited paired data and the complexity of inter-person interactions. We introduce the task of multi-person 3D motion editing, where a target motion is generated from a source and a text instruction. To support this, we propose InterEdit3D, a new dataset with manual two-person motion change annotations, and a Text-guided Multi-human Motion Editing (TMME) benchmark. We present InterEdit, a synchronized classifier-free conditional diffusion model for TMME. It introduces Semantic-Aware Plan Token Alignment with learnable tokens to capture high-level interaction cues and an Interaction-Aware Frequency Token Alignment strategy using DCT and energy pooling to model periodic motion dynamics. Experiments show that InterEdit improves text-to-motion consistency and edit fidelity, achieving state-of-the-art TMME performance. The dataset and code will be released at https://github.com/YNG916/InterEdit.
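The paper does not spell out the frequency-alignment computation, but the abstract names its two ingredients: a DCT over the motion sequence and energy pooling of the resulting coefficients to summarize periodic dynamics. A minimal numpy sketch of that pipeline is below; the function names, the orthonormal DCT-II variant, and the uniform band split are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II over the time axis of motion features.

    x: (T, D) array of T frames with D feature dims per frame.
    Returns (T, D) frequency coefficients, low frequencies first.
    Built from an explicit cosine basis so no scipy is needed.
    """
    n = x.shape[0]
    k = np.arange(n)[:, None]          # frequency index
    t = np.arange(n)[None, :]          # time index
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    basis[0] *= np.sqrt(0.5)           # orthonormal scaling of the DC row
    return basis @ x                   # (T, D)

def energy_pool(coeffs, num_bands):
    """Pool squared DCT coefficients into frequency bands.

    Sums |coefficient|^2 within each of `num_bands` contiguous bands,
    giving a compact (num_bands, D) energy signature per feature dim.
    """
    bands = np.array_split(coeffs ** 2, num_bands, axis=0)
    return np.stack([b.sum(axis=0) for b in bands])

# Toy motion sequence: 8 frames, 3 feature dims.
rng = np.random.default_rng(0)
motion = rng.standard_normal((8, 3))
signature = energy_pool(dct_ii(motion), num_bands=4)  # (4, 3)
```

Because the DCT here is orthonormal, total energy is preserved (Parseval), so `signature.sum()` equals `(motion ** 2).sum()`; in the paper such band energies would presumably feed the interaction-aware token alignment rather than be used directly.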
Problem

Research questions and friction points this paper is trying to address.

text-guided motion editing
multi-human 3D motion
inter-person interaction
motion editing
3D human motion
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-human 3D motion editing
conditional diffusion model
Semantic-Aware Plan Token Alignment
Interaction-Aware Frequency Token Alignment
text-guided motion generation
πŸ”Ž Similar Papers
No similar papers found.