SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-motion generation methods oversimplify skeletal structure, temporal dynamics, and textual semantics, resulting in insufficient cross-modal interaction; downstream editing tasks rely heavily on fine-tuning or manual intervention. This paper proposes Skeleton-Aware Latent Diffusion (SALAD), a latent diffusion model that explicitly encodes joint topology and latent-space temporal dependencies to strengthen text-motion alignment. Crucially, SALAD leverages the cross-attention maps produced during the denoising process to enable zero-shot, text-driven motion editing without fine-tuning or optimization, requiring no user input beyond text prompts. Experiments demonstrate that SALAD significantly outperforms state-of-the-art methods in text-motion consistency without compromising motion quality, while supporting diverse, fine-grained motion edits. The code is publicly available.

📝 Abstract
Text-driven motion generation has advanced significantly with the rise of denoising diffusion models. However, previous methods often oversimplify representations for the skeletal joints, temporal frames, and textual words, limiting their ability to fully capture the information within each modality and their interactions. Moreover, when using pre-trained models for downstream tasks, such as editing, they typically require additional efforts, including manual interventions, optimization, or fine-tuning. In this paper, we introduce Skeleton-Aware Latent Diffusion (SALAD), a model that explicitly captures the intricate inter-relationships between joints, frames, and words. Furthermore, by leveraging cross-attention maps produced during the generation process, we enable attention-based zero-shot text-driven motion editing using a pre-trained SALAD model, requiring no additional user input beyond text prompts. Our approach significantly outperforms previous methods in terms of text-motion alignment without compromising generation quality, and demonstrates practical versatility by providing diverse editing capabilities beyond generation. Code is available on the project page.
Problem

Research questions and friction points this paper is trying to address.

Enhances text-driven motion generation with skeleton-aware latent diffusion.
Addresses oversimplification of skeletal joints, temporal frames, and textual words.
Enables zero-shot text-driven motion editing without additional user input.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Skeleton-aware latent diffusion captures joint-frame-word relationships.
Cross-attention maps enable zero-shot text-driven motion editing.
Pre-trained SALAD model requires no additional user input.
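The attention-based editing idea above can be illustrated with a minimal sketch. The paper's actual architecture is not detailed on this page, so the sketch below assumes a generic cross-attention layer between per-frame motion latents and per-word text embeddings: the attention maps from a source generation pass are stored and then re-injected when denoising with an edited prompt, so the edited text features are applied with the source attention layout. All function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(motion_feats, word_feats, store=None, inject=None):
    """Toy frame-to-word cross-attention.

    motion_feats: (frames, d) motion latents (queries)
    word_feats:   (words, d)  text embeddings (keys/values)
    store:        list to record the attention map (source pass)
    inject:       attention map to reuse instead (editing pass)
    """
    attn = softmax(motion_feats @ word_feats.T / np.sqrt(motion_feats.shape[1]))
    if inject is not None:
        attn = inject  # reuse the source pass's frame-to-word attention
    if store is not None:
        store.append(attn)
    return attn @ word_feats  # attended text features per frame

rng = np.random.default_rng(0)
motion = rng.normal(size=(8, 16))       # 8 frames, 16-dim latents
src_words = rng.normal(size=(4, 16))    # embeddings for the source prompt
edit_words = rng.normal(size=(4, 16))   # same length: one word swapped

saved = []
out_src = cross_attention(motion, src_words, store=saved)
# word-swap edit: new text features combined under the source attention map
out_edit = cross_attention(motion, edit_words, inject=saved[0])
assert out_src.shape == out_edit.shape == (8, 16)
```

Because the attention maps alone determine which frames attend to which words, swapping the word embeddings while injecting the stored maps changes *what* is expressed without disturbing *when* it happens, which is why no fine-tuning or optimization is needed.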
👥 Authors
Seokhyeon Hong — Visual Media Lab, KAIST
Chaelin Kim — Visual Media Lab, KAIST
Serin Yoon — Visual Media Lab, KAIST
Junghyun Nam — Visual Media Lab, KAIST
Sihun Cha — KAIST
Jun-yong Noh — Visual Media Lab, KAIST