Motion Blender Gaussian Splatting for Dynamic Reconstruction

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Gaussian splatting methods rely on implicit motion modeling (e.g., neural networks or per-Gaussian parametric encodings), which yields non-editable motion representations that support only playback and thus limits their applicability to dynamic scene reconstruction. This paper proposes an explicit, editable Gaussian motion modeling framework: for the first time, it integrates sparse motion graphs with dual quaternion skinning to achieve geometrically consistent, decoupled motion control over Gaussians. It further introduces learnable weight painting functions and a jointly differentiable rendering optimization scheme, enabling end-to-end training. The method achieves state-of-the-art performance on the iPhone dataset and competitive results on the HyperNeRF benchmark. Crucially, it enables novel motion synthesis, robot motion retargeting and recombination, and other high-level editing tasks, capabilities absent in prior implicit approaches.

📝 Abstract
Gaussian splatting has emerged as a powerful tool for high-fidelity reconstruction of dynamic scenes. However, existing methods primarily rely on implicit motion representations, such as encoding motions into neural networks or per-Gaussian parameters, which makes it difficult to further manipulate the reconstructed motions. This lack of explicit controllability limits existing methods to replaying recorded motions only, which hinders a wider application. To address this, we propose Motion Blender Gaussian Splatting (MB-GS), a novel framework that uses motion graph as an explicit and sparse motion representation. The motion of graph links is propagated to individual Gaussians via dual quaternion skinning, with learnable weight painting functions determining the influence of each link. The motion graphs and 3D Gaussians are jointly optimized from input videos via differentiable rendering. Experiments show that MB-GS achieves state-of-the-art performance on the iPhone dataset while being competitive on HyperNeRF. Additionally, we demonstrate the application potential of our method in generating novel object motions and robot demonstrations through motion editing. Video demonstrations can be found at https://mlzxy.github.io/mbgs.
Problem

Research questions and friction points this paper is trying to address.

Dynamic scene reconstruction lacks explicit, editable motion control.
Implicit motion representations in existing methods support only replaying recorded motions.
Broader applications require the ability to generate and edit novel motions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion graph as explicit motion representation
Dual quaternion skinning for motion propagation
Differentiable rendering for joint optimization
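The second innovation, propagating link motion to individual Gaussians via dual quaternion skinning, can be sketched as follows. This is a minimal NumPy illustration of standard dual quaternion blending, not the paper's implementation; the function names and the example weights are assumptions, and in MB-GS the per-Gaussian weights would come from the learnable weight painting functions rather than being given by hand.

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dual_quat(rot_q, trans):
    # Build a unit dual quaternion from a rotation quaternion and a
    # translation vector: real part = rotation, dual part = 0.5 * t ⊗ r.
    t = np.array([0.0, *trans])
    return rot_q, 0.5 * quat_mul(t, rot_q)

def dqs_transform(point, dquats, weights):
    # Blend the link dual quaternions with per-Gaussian skinning weights,
    # renormalize, then apply the resulting rigid transform to the point.
    real = sum(w * dq[0] for w, dq in zip(weights, dquats))
    dual = sum(w * dq[1] for w, dq in zip(weights, dquats))
    n = np.linalg.norm(real)
    real, dual = real / n, dual / n
    # Rotate the point by the real (rotation) quaternion.
    w, v = real[0], real[1:]
    rotated = point + 2.0 * np.cross(v, np.cross(v, point) + w * point)
    # Recover the translation: t = vector part of 2 * dual ⊗ conj(real).
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    t = 2.0 * quat_mul(dual, conj)[1:]
    return rotated + t

# Example: a Gaussian influenced equally by a static link and a link
# translated by (1, 0, 0) moves halfway, i.e. to (0.5, 0, 0).
identity = np.array([1.0, 0.0, 0.0, 0.0])
dq_static = dual_quat(identity, [0.0, 0.0, 0.0])
dq_moved = dual_quat(identity, [1.0, 0.0, 0.0])
p = dqs_transform(np.zeros(3), [dq_static, dq_moved], [0.5, 0.5])
```

Unlike linear blend skinning, blending in dual quaternion space always yields a valid rigid transform after renormalization, which is what gives the geometrically consistent deformations the summary refers to.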
Xinyu Zhang
Department of Computer Science, Rutgers University

Haonan Chang
Rutgers University, Robotics Ph.D.
LLM · VLM · 3D understanding · Manipulation

Yuhan Liu
Department of Computer Science, Rutgers University

Abdeslam Boularias
Rutgers University
Robotics · Artificial Intelligence · Machine Learning · Computer Vision