RobotDesignGPT: Automated Robot Design Synthesis using Vision Language Models

πŸ“… 2026-01-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing robot design methodologies rely heavily on expert knowledge and rule-based systems, limiting their adaptability to diverse user requirements. This work proposes an end-to-end automated design framework grounded in large-scale vision-language models. Starting from user-provided textual prompts and reference images, the framework generates initial robot designs and iteratively refines them through a cross-modal optimization process that integrates bio-inspiration, kinematic feasibility constraints, and visual feedback. The approach produces a variety of biomimetic, kinematically valid, and visually appealing robot designs. Ablation studies and user evaluations demonstrate the method's effectiveness over conventional design paradigms.

πŸ“ Abstract
Robot design is a nontrivial process that involves careful consideration of multiple criteria, including user specifications, kinematic structures, and visual appearance. Therefore, the design process often relies heavily on domain expertise and significant human effort. The majority of current methods are rule-based, requiring the specification of a grammar or a set of primitive components and modules that can be composed to create a design. We propose a novel automated robot design framework, RobotDesignGPT, that leverages the general knowledge and reasoning capabilities of large pre-trained vision-language models to automate the robot design synthesis process. Our framework synthesizes an initial robot design from a simple user prompt and a reference image. Our novel visual feedback approach allows us to greatly improve the design quality and reduce unnecessary manual feedback. We demonstrate that our framework can design visually appealing and kinematically valid robots inspired by nature, ranging from legged animals to flying creatures. We justify the proposed framework by conducting an ablation study and a user study.
Problem

Research questions and friction points this paper is trying to address.

robot design
automated synthesis
vision-language models
design automation
kinematic structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Automated Robot Design
Visual Feedback
Design Synthesis
Kinematic Validity
Nitish Sontakke
School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, 30308, USA
K. N. Kumar
School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, 30308, USA
Sehoon Ha
Georgia Institute of Technology
robotics, computer graphics, machine learning