🤖 AI Summary
Existing dexterous grasp generation methods struggle to jointly model grasp categorization, contact semantics, and functional affordances, resulting in weak semantic controllability and poor human interpretability. This paper proposes a multimodal, semantics-aware framework that, for the first time, jointly embeds these three semantic dimensions into a vision-language model, enabling fine-grained grasp control driven by natural-language instructions. The framework integrates multi-agent collaborative reasoning, retrieval-augmented generation, and chain-of-thought prompting with physics-based optimization and category-aware differential force-closure sampling to ensure pose feasibility and diversity. Evaluated in both simulation and real-robot settings, the method significantly outperforms state-of-the-art approaches, improving semantic consistency (+28.6%), contact structural richness (+34.1%), and functional affordance coverage (+41.3%).
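The force-closure sampling mentioned above builds on a differentiable force-closure criterion. Below is a minimal sketch of one common surrogate for that criterion, assuming each contact force acts along the inward surface normal so that a (relaxed) force-closure grasp yields a near-zero net wrench. The function name and NumPy formulation are illustrative only and do not reproduce the paper's taxonomy-aware sampler.

```python
import numpy as np

def force_closure_energy(contact_points, contact_normals):
    """Illustrative force-closure surrogate: treat each contact force as the
    inward surface normal at that contact; a grasp close to force closure
    should produce a near-zero net wrench (force + torque). Lower is better."""
    contact_points = np.asarray(contact_points, dtype=float)    # (N, 3) positions
    contact_normals = np.asarray(contact_normals, dtype=float)  # (N, 3) unit inward normals
    net_force = contact_normals.sum(axis=0)                             # sum of contact forces
    net_torque = np.cross(contact_points, contact_normals).sum(axis=0)  # sum of torques about origin
    wrench = np.concatenate([net_force, net_torque])                    # 6-D net wrench
    return float(wrench @ wrench)

# Example: two antipodal contacts on a unit sphere give (near-)zero energy.
points = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
normals = [[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]  # inward-pointing normals
print(force_closure_energy(points, normals))    # ~0.0
```

In a sampling loop, such an energy would be minimized (e.g., by gradient descent over hand configurations) alongside penetration and joint-limit penalties; the paper's category-aware variant presumably conditions this sampling on the grasp taxonomy.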
📝 Abstract
Dexterous grasp generation aims to produce grasp poses that satisfy task requirements and align with human-interpretable grasp semantics. However, achieving semantically controllable dexterous grasp synthesis remains highly challenging due to the lack of unified modeling of multiple semantic dimensions, including grasp taxonomy, contact semantics, and functional affordance. To address these limitations, we present OmniDexVLG, a multimodal, semantics-aware grasp generation framework capable of producing structurally diverse and semantically coherent dexterous grasps under joint language and visual guidance. Our approach begins with OmniDexDataGen, a semantically rich dexterous grasp dataset generation pipeline that integrates grasp-taxonomy-guided configuration sampling, functional-affordance contact-point sampling, taxonomy-aware differential force-closure grasp sampling, and physics-based optimization and validation, enabling systematic coverage of diverse grasp types. We further introduce OmniDexReasoner, a multimodal grasp-type semantic reasoning module that leverages multi-agent collaboration, retrieval-augmented generation, and chain-of-thought reasoning to infer grasp-related semantics and generate high-quality annotations that align language instructions with task-specific grasp intent. Building on these components, we develop a unified Vision-Language Grasping generation model that explicitly incorporates grasp taxonomy, contact structure, and functional affordance semantics, enabling fine-grained control over grasp synthesis from natural language instructions. Extensive experiments on object grasping in simulation and the real world, together with ablation studies, demonstrate that our method substantially outperforms state-of-the-art approaches in grasp diversity, contact-semantic diversity, functional-affordance diversity, and semantic consistency.
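The abstract describes OmniDexReasoner only at a high level; the sketch below illustrates one plausible shape of a multi-agent, retrieval-augmented, chain-of-thought annotation loop. The function names, prompt wording, and majority-vote aggregation are assumptions made for illustration, not the paper's implementation, and the agent callables stand in for real vision-language model queries.

```python
from collections import Counter
from typing import Callable, Sequence

def annotate_grasp_semantics(
    instruction: str,
    retrieved_exemplars: Sequence[str],
    agents: Sequence[Callable[[str], str]],
) -> str:
    """Hypothetical annotation loop: each agent (a caller-supplied LLM/VLM call)
    reasons step by step over the instruction plus retrieved exemplar
    annotations; the final grasp-type label is chosen by majority vote."""
    context = "\n".join(retrieved_exemplars)  # retrieval-augmented context
    prompt = (
        f"Exemplar grasp annotations:\n{context}\n\n"
        f"Instruction: {instruction}\n"
        "Think step by step about the required grasp taxonomy, contact regions, "
        "and functional affordance, then answer with a single grasp-type label."
    )
    votes = Counter(agent(prompt).strip() for agent in agents)
    label, _ = votes.most_common(1)[0]
    return label

# Toy usage with stub agents; a real system would query a vision-language model.
stub_agents = [lambda p: "power grasp", lambda p: "power grasp", lambda p: "precision pinch"]
print(annotate_grasp_semantics(
    "pick up the hammer by its handle",
    ["mug handle -> power grasp", "scissors -> precision pinch"],
    stub_agents,
))  # -> "power grasp"
```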