OmniDexVLG: Learning Dexterous Grasp Generation from Vision Language Model-Guided Grasp Semantics, Taxonomy and Functional Affordance

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing dexterous grasp generation methods struggle to jointly model grasp categorization, contact semantics, and functional affordances, resulting in weak semantic controllability and poor human interpretability. This paper proposes a multimodal semantic-aware framework that, for the first time, jointly embeds these three semantic dimensions into a vision-language model, enabling fine-grained, natural-language-instruction-driven grasp control. We introduce multi-agent collaborative reasoning, retrieval-augmented generation, and chain-of-thought prompting, integrated with physics-based optimization and category-aware differential force-closure sampling to ensure pose feasibility and diversity. Evaluated in both simulation and real-robot settings, our method significantly outperforms state-of-the-art approaches, improving semantic consistency (+28.6%), contact structural richness (+34.1%), and functional affordance coverage (+41.3%).

📝 Abstract
Dexterous grasp generation aims to produce grasp poses that align with task requirements and human-interpretable grasp semantics. However, achieving semantically controllable dexterous grasp synthesis remains highly challenging due to the lack of unified modeling of multiple semantic dimensions, including grasp taxonomy, contact semantics, and functional affordance. To address these limitations, we present OmniDexVLG, a multimodal, semantics-aware grasp generation framework capable of producing structurally diverse and semantically coherent dexterous grasps under joint language and visual guidance. Our approach begins with OmniDexDataGen, a semantics-rich dexterous grasp dataset generation pipeline that integrates grasp-taxonomy-guided configuration sampling, functional-affordance contact point sampling, taxonomy-aware differential force-closure grasp sampling, and physics-based optimization and validation, enabling systematic coverage of diverse grasp types. We further introduce OmniDexReasoner, a multimodal grasp-type semantic reasoning module that leverages multi-agent collaboration, retrieval-augmented generation, and chain-of-thought reasoning to infer grasp-related semantics and generate high-quality annotations that align language instructions with task-specific grasp intent. Building upon these components, we develop a unified Vision-Language Grasping generation model that explicitly incorporates grasp taxonomy, contact structure, and functional affordance semantics, enabling fine-grained control over grasp synthesis from natural language instructions. Extensive experiments in simulation and real-world object grasping, together with ablation studies, demonstrate that our method substantially outperforms state-of-the-art approaches in terms of grasp diversity, contact semantic diversity, functional affordance diversity, and semantic consistency.
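The abstract's "differential force-closure grasp sampling" builds on a differentiable proxy for force closure. A minimal sketch of one commonly used formulation (following Liu et al. 2021, not necessarily the paper's exact implementation; `dfc_residual` and its arguments are illustrative names): the residual is the norm of the net wrench when every contact pushes with unit force along its inward normal, so values near zero indicate that the contact normals balance out, a necessary condition the sampler can minimize by gradient descent.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def dfc_residual(points, normals):
    """Differentiable force-closure proxy: norm of the net wrench
    produced by unit normal forces at the given contact points."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    wrench = np.zeros(6)
    for p, n in zip(points, normals):
        wrench[:3] += n            # net force contribution
        wrench[3:] += skew(p) @ n  # net torque about the origin
    return float(np.linalg.norm(wrench))
```

For two antipodal contacts on a sphere the residual is zero, while two contacts pushing in the same direction leave a large unbalanced force, which is why the metric separates stable from unstable contact sets.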
Problem

Research questions and friction points this paper is trying to address.

Generating dexterous grasps aligned with task requirements and semantics
Lacking unified modeling of grasp taxonomy, contact, and functional affordance
Enabling fine-grained semantic control from language and visual guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates semantic-rich grasp dataset via taxonomy-guided sampling pipeline
Uses multimodal reasoning with agent collaboration for grasp intent alignment
Unifies vision-language model for fine-grained grasp control from instructions
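The retrieval-augmented, chain-of-thought annotation step described above can be sketched as follows. This is a toy illustration only: the taxonomy excerpt, the hashed bag-of-words `embed` function, and all names are stand-ins (a real system would use a learned text encoder and the full grasp taxonomy), but the structure mirrors the idea of retrieving relevant grasp types and prompting the model to reason before labeling.

```python
import numpy as np

GRASP_TAXONOMY = {  # hypothetical excerpt of a grasp taxonomy
    "power sphere": "all fingers wrap a round object toward the palm",
    "precision pinch": "thumb and index fingertip oppose on a small part",
    "lateral pinch": "thumb presses an object against the index side",
}

def embed(text, dim=64):
    """Stand-in embedding: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def retrieve(instruction, k=2):
    """Return the k taxonomy entries most similar to the instruction."""
    q = embed(instruction)
    scored = sorted(GRASP_TAXONOMY.items(),
                    key=lambda kv: -float(embed(kv[0] + " " + kv[1]) @ q))
    return scored[:k]

def build_prompt(instruction):
    """Assemble a chain-of-thought prompt over the retrieved context."""
    ctx = "\n".join(f"- {name}: {desc}" for name, desc in retrieve(instruction))
    return (f"Known grasp types:\n{ctx}\n"
            f"Instruction: {instruction}\n"
            "Reason step by step, then name the grasp type.")
```

A multi-agent setup would then have one agent propose a grasp-type label from this prompt and others verify it against the contact structure before the annotation is accepted.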
Lei Zhang (University of Hamburg)
Diwen Zheng (Agile Robots SE)
Kaixin Bai (University of Hamburg)
Zhenshan Bing (Nanjing University / Technical University of Munich)
Zoltán-Csaba Márton (Agile Robots SE)
Zhaopeng Chen (Agile Robots SE)
Alois Knoll (Technische Universität München)
Jianwei Zhang (University of Hamburg)