Learning Context-Adaptive Motion Priors for Masked Motion Diffusion Models with Efficient Kinematic Attention Aggregation

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of joint occlusion in visual motion capture and the high noise levels and reliance on manual correction in wearable sensor data. To this end, the authors propose the Masked Motion Diffusion Model (MMDM), which integrates a masked autoencoder with a diffusion generative framework. MMDM introduces a Kinematic Attention Aggregation (KAA) mechanism to efficiently encode joint and pose features and leverages context-adaptive motion priors for task-specific reconstruction. The model requires no architectural modifications to support diverse motion generation tasks, achieving state-of-the-art performance in motion completion, refinement, and in-betweening. Furthermore, MMDM demonstrates strong robustness across various masking strategies, highlighting its versatility and effectiveness in handling incomplete or noisy motion data.

📝 Abstract
Vision-based motion capture solutions often struggle with occlusions, which result in the loss of critical joint information and hinder accurate 3D motion reconstruction. Wearable alternatives, meanwhile, suffer from noisy or unstable data, often requiring extensive manual cleaning and correction to achieve reliable results. To address these challenges, we introduce the Masked Motion Diffusion Model (MMDM), a diffusion-based generative reconstruction framework that enhances incomplete or low-confidence motion data using partially available high-quality reconstructions within a Masked Autoencoder architecture. Central to our design is the Kinematic Attention Aggregation (KAA) mechanism, which enables efficient, deep, and iterative encoding of both joint-level and pose-level features, capturing structural and temporal motion patterns essential for task-specific reconstruction. We focus on learning context-adaptive motion priors: specialized structural and temporal features extracted by the same reusable architecture, where each learned prior emphasizes different aspects of motion dynamics and is particularly effective for its corresponding task. This enables the architecture to adaptively specialize without altering its structure. Such versatility allows MMDM to efficiently learn motion priors tailored to scenarios such as motion refinement, completion, and in-betweening. Extensive evaluations on public benchmarks demonstrate that MMDM achieves strong performance across diverse masking strategies and task settings. The source code is available at https://github.com/jjkislele/MMDM.
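The abstract does not spell out the KAA mechanism, but the core idea it builds on, attention that aggregates per-joint features while ignoring masked (occluded or low-confidence) joints, can be illustrated with a minimal NumPy sketch. All names, shapes, and weight matrices below are hypothetical choices for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention_aggregate(joints, mask, d_k=8, seed=0):
    """Aggregate per-joint features with attention, hiding masked joints.

    joints: (J, D) array of per-joint features for one pose
    mask:   (J,) array, 1 = observed joint, 0 = missing/low-confidence
    Returns (J, D) aggregated features; observed joints only attend to
    other observed joints, so masked inputs cannot leak into them.
    """
    rng = np.random.default_rng(seed)          # fixed projections for the demo
    J, D = joints.shape
    Wq = rng.standard_normal((D, d_k)) / np.sqrt(D)
    Wk = rng.standard_normal((D, d_k)) / np.sqrt(D)
    Wv = rng.standard_normal((D, D)) / np.sqrt(D)
    q, k, v = joints @ Wq, joints @ Wk, joints @ Wv
    scores = q @ k.T / np.sqrt(d_k)            # (J, J) pairwise attention logits
    scores = np.where(mask[None, :] > 0, scores, -1e9)  # mask out missing keys
    attn = softmax(scores, axis=-1)            # masked columns get ~zero weight
    return attn @ v
```

A quick sanity check of the masking: if the features of a masked joint are perturbed, the aggregated outputs of the observed joints do not change, since the masked joint's keys and values receive (numerically) zero attention weight.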
Problem

Research questions and friction points this paper is trying to address.

occlusions
motion capture
noisy data
3D motion reconstruction
motion priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Motion Diffusion Model
Kinematic Attention Aggregation
Context-Adaptive Motion Priors
Motion Reconstruction
Masked Autoencoder
Junkun Jiang
Hong Kong Baptist University
Computer Vision, Human Pose Estimation, Motion Capture
Jie Chen
Hong Kong Baptist University
Computational Photography, Multimedia, 3D, Art-Tech
Ho Yin Au
Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Jingyu Xiang
Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China