Articulate That Object Part (ATOP): 3D Part Articulation from Text and Motion Personalization

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses text-driven local part motion generation for 3D objects—i.e., precisely controlling the motion of specific parts of a target 3D model via natural language instructions. We propose the first framework integrating text guidance with multi-view rendering for motion personalization: (1) a video diffusion model generates reference motions conditioned on text; (2) these motions are transferred and refined onto the target 3D mesh using differentiable rendering and Score Distillation Sampling (SDS) loss; and (3) to overcome the video diffusion model’s lack of part-level motion awareness, we introduce a few-shot category-adaptive fine-tuning strategy. Experiments demonstrate high-fidelity motion video synthesis for unseen object–motion combinations, significantly improving part motion parameter prediction accuracy and cross-category generalization capability.
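The page does not spell out what the "part motion parameters" are; a revolute (hinge) joint with a learnable axis, pivot, and angle is a common parameterization for part articulation. Below is a minimal PyTorch sketch of how such parameters can move a part mesh differentiably. All names, the initialization, and the placeholder vertex tensor are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: the paper's exact motion parameterization is not given
# here; a revolute (hinge) joint is a common choice for part articulation.
import torch

def articulate_part(verts, axis, pivot, angle):
    """Rotate part vertices `verts` (N, 3) by `angle` (radians) about a line
    through `pivot` with direction `axis`, using Rodrigues' formula.
    Everything is built from differentiable tensor ops, so gradients flow
    back into axis, pivot, and angle."""
    k = axis / axis.norm()                      # unit rotation axis
    v = verts - pivot                           # move pivot to the origin
    cos, sin = torch.cos(angle), torch.sin(angle)
    # Rodrigues: v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t))
    v_rot = (v * cos
             + torch.cross(k.expand_as(v), v, dim=-1) * sin
             + k * (v @ k).unsqueeze(-1) * (1.0 - cos))
    return v_rot + pivot

# Learnable motion parameters for one articulated part (illustrative init).
axis = torch.nn.Parameter(torch.tensor([0.0, 1.0, 0.0]))
pivot = torch.nn.Parameter(torch.zeros(3))
angle = torch.nn.Parameter(torch.tensor(0.3))
verts = torch.rand(100, 3)                      # placeholder part vertices
moved = articulate_part(verts, axis, pivot, angle)
```

Because the transform is purely differentiable, any image-space loss computed on a render of `moved` can propagate gradients back to the three motion parameters, which is what makes the SDS-based refinement step possible.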

📝 Abstract
We present ATOP (Articulate That Object Part), a novel method based on motion personalization to articulate a 3D object with respect to a part and its motion as prescribed in a text prompt. Specifically, the text input allows us to tap into the power of modern-day video diffusion to generate plausible motion samples for the right object category and part. In turn, the input 3D object provides image prompting to personalize the generated video to the very object we wish to articulate. Our method starts with few-shot finetuning for category-specific motion generation, a key first step to compensate for the lack of articulation awareness in current video diffusion models. For this, we finetune a pre-trained multi-view image generation model for controllable multi-view video generation, using a small collection of video samples obtained for the target object category. This is followed by motion video personalization, realized via multi-view rendered images of the target 3D object. Finally, we transfer the personalized video motion to the target 3D object via differentiable rendering, optimizing part motion parameters with a score distillation sampling loss. We show that our method generates realistic motion videos and predicts 3D motion parameters more accurately and generalizably than prior works.
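To make the last step concrete, here is a minimal sketch of one Score Distillation Sampling update on the motion parameters. `renderer`, `diffusion.add_noise`, and `diffusion.predict_noise` are hypothetical placeholders standing in for the paper's components, not its actual API; `articulate_part` refers to the sketch above.

```python
# Hypothetical SDS step; the renderer and diffusion wrappers are assumed
# interfaces, not the paper's real components.
import torch

def sds_step(diffusion, renderer, verts, faces, axis, pivot, angle, opt):
    """One Score Distillation Sampling update on the part motion parameters."""
    moved = articulate_part(verts, axis, pivot, angle)   # see sketch above
    img = renderer(moved, faces)                         # differentiable render
    t = torch.randint(20, 980, (1,))                     # random diffusion step
    noise = torch.randn_like(img)
    noisy = diffusion.add_noise(img, noise, t)           # forward-noise render
    with torch.no_grad():
        pred = diffusion.predict_noise(noisy, t)         # frozen video model
    # SDS: use (pred - noise), detached, as a per-pixel gradient on the render.
    grad = (pred - noise).detach()
    loss = (grad * img).sum()   # surrogate loss: d(loss)/d(img) equals `grad`
    opt.zero_grad()
    loss.backward()             # gradients flow through the renderer only
    opt.step()
    return loss.item()
```

The key design point is that the diffusion model stays frozen: its denoising direction is detached and applied as a gradient on the rendered image, so all optimization pressure lands on the part motion parameters (axis, pivot, angle) held by the optimizer.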
Problem

Research questions and friction points this paper is trying to address.

3D object part articulation
motion personalization from text
video diffusion model enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion personalization for 3D articulation
Few-shot finetuning for motion generation
Differentiable rendering for motion transfer
Aditya Vora
Simon Fraser University, Canada
Sauradip Nag
CVSSP, University of Surrey
Computer Vision · Computer Graphics · Deep Learning
Hao Zhang
Simon Fraser University, Canada