Text-to-Robotic Assembly of Multi-Component Objects using 3D Generative AI and Vision Language Models

📅 2025-11-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses natural language–driven robotic assembly of multi-component objects. We propose an end-to-end framework integrating 3D generative AI with vision-language models (VLMs). Methodologically, we design a zero-shot multimodal reasoning mechanism that jointly performs geometric modeling and functional semantic understanding to enable component decomposition, structural and panel component assignment, and interactive assembly planning, while supporting dynamic configuration refinement via conversational feedback. The key contribution is the integration of VLMs into a closed-loop assembly reasoning pipeline, overcoming the limitations of conventional rule- or geometry-driven paradigms. Experiments show that VLM-generated component assignments align with user preferences 90.6% of the time in human evaluation, significantly outperforming rule-based (59.4%) and random (2.5%) baselines, validating the efficacy and practicality of semantics-guided robotic assembly.

📝 Abstract
Advances in 3D generative AI have enabled the creation of physical objects from text prompts, but challenges remain in creating objects involving multiple component types. We present a pipeline that integrates 3D generative AI with vision-language models (VLMs) to enable the robotic assembly of multi-component objects from natural language. Our method leverages VLMs for zero-shot, multi-modal reasoning about geometry and functionality to decompose AI-generated meshes into multi-component 3D models using predefined structural and panel components. We demonstrate that a VLM is capable of determining which mesh regions need panel components in addition to structural components, based on the object's geometry and functionality. Evaluation across test objects shows that users preferred the VLM-generated assignments 90.6% of the time, compared to 59.4% for rule-based and 2.5% for random assignment. Lastly, the system allows users to refine component assignments through conversational feedback, enabling greater human control and agency in making physical objects with generative AI and robotics.
Problem

Research questions and friction points this paper is trying to address.

Robotic assembly of multi-component objects from text descriptions
Decomposing AI-generated 3D meshes into structural and panel components
Determining component assignments based on object functionality requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates 3D generative AI with vision-language models
Decomposes AI-generated meshes using predefined structural components
Enables conversational feedback for refining component assignments
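The assign-then-refine loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `stub_vlm_assign` is a hypothetical stand-in for a real VLM query (here it just keys off region names), and the feedback format is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """One mesh region of the decomposed object."""
    name: str
    needs_panel: bool = False  # False = structural components only

def stub_vlm_assign(regions):
    # Hypothetical stand-in for a VLM call that reasons about geometry
    # and functionality; here, names hinting at functional surfaces
    # (seating, work surfaces) get a panel in addition to structure.
    panel_words = {"seat", "top", "shelf", "back"}
    for r in regions:
        r.needs_panel = any(w in r.name for w in panel_words)
    return regions

def refine(regions, feedback):
    # Conversational refinement: the user toggles named regions.
    for r in regions:
        if r.name in feedback.get("toggle", []):
            r.needs_panel = not r.needs_panel
    return regions

regions = [Region("seat"), Region("leg_front_left"), Region("backrest")]
regions = stub_vlm_assign(regions)
regions = refine(regions, {"toggle": ["backrest"]})
print([(r.name, r.needs_panel) for r in regions])
# → [('seat', True), ('leg_front_left', False), ('backrest', False)]
```

The point of the structure is that the VLM's zero-shot assignment is only an initial proposal; the user keeps agency by overriding individual regions through feedback before assembly planning proceeds.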
Alexander Htet Kyaw
Massachusetts Institute of Technology (MIT)
Richa Gupta
MIT
Dhruv Shah
Princeton University, Google DeepMind
Robot Learning, Artificial Intelligence, Robotics, Reinforcement Learning
Anoop Sinha
Google, Paradigms of Intelligence
Kory Mathewson
Google DeepMind
Stefanie Pender
Autodesk Research
Sachin Chitta
Director of Robotics Research, Autodesk
Robotics, Manipulation, Motion Planning, Mobile Manipulation
Yotto Koga
Autodesk Research
Faez Ahmed
Associate Professor, MIT
Generative AI, Engineering Design, Machine Learning, Engineering Optimization, Data-driven Design
Lawrence Sass
MIT Architecture
Randall Davis
MIT CSAIL