AI Summary
Existing robotic design methodologies heavily rely on expert knowledge and rule-based systems, limiting their adaptability to diverse user requirements. This work proposes the first end-to-end automated design framework grounded in large-scale vision-language models. Starting from user-provided textual prompts and reference images, the framework generates initial robot designs and iteratively refines them through a cross-modal optimization process that integrates bio-inspiration, kinematic feasibility constraints, and visual feedback. The approach successfully produces a variety of biomimetic, kinematically viable, and visually appealing robot designs. Ablation studies and user evaluations demonstrate the method's effectiveness and superiority over conventional design paradigms.
Abstract
Robot design is a nontrivial process that involves careful consideration of multiple criteria, including user specifications, kinematic structure, and visual appearance. As a result, the design process typically relies heavily on domain expertise and significant human effort. Most current methods are rule-based, requiring the specification of a grammar or a set of primitive components and modules that can be composed into a design. We propose a novel automated robot design framework, RobotDesignGPT, that leverages the general knowledge and reasoning capabilities of large pre-trained vision-language models to automate robot design synthesis. Our framework synthesizes an initial robot design from a simple user prompt and a reference image, and our novel visual feedback approach then substantially improves design quality while reducing the amount of manual feedback required. We demonstrate that our framework can design visually appealing and kinematically valid robots inspired by nature, ranging from legged animals to flying creatures. We validate the proposed framework through an ablation study and a user study.
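To make the pipeline described above concrete, the following is a minimal sketch of how a prompt-to-design loop with visual feedback might be structured. It is not the paper's implementation: `query_vlm`, `render_design`, `check_kinematics`, and the `RobotDesign` container are hypothetical placeholders standing in for a real vision-language model client, a renderer, and a kinematic-validity checker.

```python
# Hypothetical sketch of a prompt-to-design loop with visual feedback.
# None of these names come from the paper: query_vlm, render_design,
# and check_kinematics are placeholders for a vision-language model
# client, a renderer, and a kinematic-validity checker.

from dataclasses import dataclass


@dataclass
class RobotDesign:
    spec: str  # structured text description of links, joints, geometry


def query_vlm(prompt, image_path=None):
    """Placeholder: send a text prompt (optionally with an image) to a VLM."""
    raise NotImplementedError("plug in a real vision-language model client")


def render_design(design):
    """Placeholder: render the design and return a path to the image."""
    raise NotImplementedError


def check_kinematics(design):
    """Placeholder: verify joint connectivity, limits, and reachability."""
    raise NotImplementedError


def design_robot(user_prompt, reference_image, max_iters=5):
    # Synthesize an initial design from the user prompt and reference image.
    spec = query_vlm(
        f"Propose a robot design for: {user_prompt}",
        image_path=reference_image,
    )
    design = RobotDesign(spec=spec)

    # Iteratively refine the design using visual feedback: render the
    # current candidate, ask the VLM to critique it, and apply the fixes.
    for _ in range(max_iters):
        rendering = render_design(design)
        critique = query_vlm(
            f"Critique this rendered robot against the request "
            f"'{user_prompt}' and suggest concrete structural fixes.",
            image_path=rendering,
        )
        revised = query_vlm(
            f"Revise this design spec:\n{design.spec}\n"
            f"Apply this feedback:\n{critique}"
        )
        candidate = RobotDesign(spec=revised)

        # Accept the revision only if it stays kinematically valid.
        if check_kinematics(candidate):
            design = candidate

    return design
```

The key design choice mirrored here is that the feedback signal is visual: each revision is judged on a rendering of the candidate rather than on its textual specification alone, which is what allows the loop to replace much of the manual feedback.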