🤖 AI Summary
To address low sampling efficiency and slow convergence in black-box optimization for robotic embodiment design, this paper introduces large language models (LLMs) into multi-objective black-box optimization, proposing a knowledge-driven parallel design generation method. The method leverages LLMs to generate diverse, high-quality candidate designs conditioned on problem constraints and historical performance feedback; a feedback-guided prompting mechanism dynamically refines the search direction. Experiments demonstrate that the approach significantly improves sampling efficiency and directional search capability, achieving faster convergence to high-performance Pareto-optimal solutions with fewer iterations, thereby shortening the robotic design cycle. The core contribution lies in the synergistic modeling of LLMs and black-box optimization, enabling a shift from purely data-driven design to a hybrid knowledge- and data-driven paradigm.
📝 Abstract
Various methods for robot design optimization have been developed, ranging from numerical optimization to black-box optimization. While numerical optimization is fast, it is unsuitable for problems involving complex structures or discrete values, so black-box optimization is often used instead. However, black-box optimization suffers from low sampling efficiency and requires many sampling iterations to obtain good solutions. In this study, we propose a method to improve the efficiency of black-box optimization for robot body design by utilizing large language models (LLMs). In parallel with the sampling process of the black-box optimizer, additional samples are generated by LLMs that are provided with the problem setting and extensive feedback on past evaluations. We demonstrate that this method enables more efficient exploration of design solutions and discuss its characteristics and limitations.
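The parallel sampling loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the objective, the mutation-based black-box sampler, and especially `llm_sample` (a deterministic stand-in for prompting an LLM with the problem setting and ranked feedback) are all hypothetical simplifications.

```python
import random

def evaluate(design):
    # Toy single-objective stand-in: squared distance of (link length,
    # mass) from an assumed optimum at (0.7, 0.3). Lower is better.
    length, mass = design
    return (length - 0.7) ** 2 + (mass - 0.3) ** 2

def bbo_sample(history):
    # Simple black-box sampler: perturb the best design found so far.
    if not history:
        return (random.random(), random.random())
    best = min(history, key=lambda h: h[1])[0]
    return (best[0] + random.gauss(0, 0.1), best[1] + random.gauss(0, 0.1))

def llm_sample(history):
    # Stand-in for the LLM sampler. A real system would serialize the
    # problem setting and the evaluation history into a prompt and parse
    # a proposed design from the LLM's reply; here we mimic "directional"
    # reasoning by extrapolating from the two best designs seen so far.
    if len(history) < 2:
        return (random.random(), random.random())
    ranked = sorted(history, key=lambda h: h[1])
    (a, _), (b, _) = ranked[0], ranked[1]
    return (a[0] + 0.5 * (a[0] - b[0]), a[1] + 0.5 * (a[1] - b[1]))

def optimize(iterations=30, seed=0):
    random.seed(seed)
    history = []  # list of (design, score) pairs shared by both samplers
    for _ in range(iterations):
        # Each iteration, both samplers propose candidates in parallel
        # and their evaluations feed the shared history.
        for sampler in (bbo_sample, llm_sample):
            design = sampler(history)
            history.append((design, evaluate(design)))
    return min(history, key=lambda h: h[1])

best_design, best_score = optimize()
```

The key structural point is that both samplers read from, and write to, the same evaluation history, so LLM proposals are conditioned on feedback from the black-box search and vice versa.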