🤖 AI Summary
Problem: Dexterous hand morphology design relies heavily on expert intuition, simulation-based optimization incurs high computational costs, and existing automated methods lack semantic understanding of manipulation tasks.
Method: This paper introduces the first zero-shot, large language model (LLM)-driven framework for dexterous hand structural synthesis. It directly parses natural-language task descriptions, generates symbolic hand structures, and maps them to Open-Source Prosthetic Hand (OPH)-compatible parameters. The framework jointly optimizes semantic alignment and dimensional compatibility, and supports LLM-guided iterative refinement, eliminating the need for simulation or manual parameter tuning.
Results: Experiments demonstrate that the framework automatically generates 3D-printable, functionally appropriate, and morphologically diverse dexterous hands across multiple manipulation tasks. It significantly improves design efficiency and generalization capability, enabling end-to-end, semantics-to-manufacturable-structure mapping without human intervention.
📝 Abstract
Designing robotic hand morphologies for diverse manipulation tasks requires balancing dexterity, manufacturability, and task-specific functionality. While open-source frameworks and parametric tools support reproducible design, they still rely on expert heuristics and manual tuning. Automated methods using optimization are often compute-intensive, simulation-dependent, and rarely target dexterous hands. Large language models (LLMs), with their broad knowledge of human-object interactions and strong generative capabilities, offer a promising alternative for zero-shot design reasoning. In this paper, we present Lang2Morph, a language-driven pipeline for robotic hand design. It uses LLMs to translate natural-language task descriptions into symbolic structures and OPH-compatible parameters, enabling 3D-printable task-specific morphologies. The pipeline consists of: (i) Morphology Design, which maps tasks into semantic tags, structural grammars, and OPH-compatible parameters; and (ii) Selection and Refinement, which evaluates design candidates based on semantic alignment and size compatibility, and optionally applies LLM-guided refinement when needed. We evaluate Lang2Morph across varied tasks, and results show that our approach can generate diverse, task-relevant morphologies. To our knowledge, this is the first attempt to develop an LLM-based framework for task-conditioned robotic hand design.
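The two stages described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the class fields, function names, scoring weights, and the `mock_llm` stub are all assumptions standing in for the actual LLM prompts and OPH parameter schema.

```python
from dataclasses import dataclass

# Hypothetical representation of one design candidate; the real
# OPH parameter set is richer than this sketch.
@dataclass
class HandDesign:
    tags: list        # semantic tags parsed from the task description
    grammar: str      # symbolic hand structure (e.g., finger/joint layout)
    oph_params: dict  # OPH-compatible dimensional parameters

def morphology_design(task: str, llm) -> list:
    """Stage (i): map a task description to candidate hand designs."""
    return [HandDesign(**spec) for spec in llm(task)]

def score(design: HandDesign, task: str) -> float:
    """Toy score combining semantic alignment and size compatibility
    (the paper's actual criteria are evaluated by the LLM pipeline)."""
    semantic = sum(tag in task for tag in design.tags) / max(len(design.tags), 1)
    size_ok = 1.0 if 0.05 <= design.oph_params.get("palm_width_m", 0) <= 0.12 else 0.0
    return 0.5 * semantic + 0.5 * size_ok

def select_and_refine(candidates, task, llm, threshold=0.7):
    """Stage (ii): keep the best candidate; ask the LLM to refine it
    only when its score falls below the threshold."""
    best = max(candidates, key=lambda d: score(d, task))
    if score(best, task) < threshold:
        best = HandDesign(**llm(f"refine: {task}")[0])  # LLM-guided refinement
    return best

# Stub standing in for a real LLM call returning structured specs.
def mock_llm(prompt):
    return [{"tags": ["grasp", "bottle"],
             "grammar": "palm(finger*3, thumb)",
             "oph_params": {"palm_width_m": 0.08}}]

task = "grasp a bottle and pour water"
design = select_and_refine(morphology_design(task, mock_llm), task, mock_llm)
print(design.grammar)  # → palm(finger*3, thumb)
```

The point of the sketch is the control flow: generation and selection are decoupled, so refinement is invoked only when a candidate fails the semantic/size check, which is how the pipeline avoids simulation in the loop.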