🤖 AI Summary
Existing 3D grasp generation methods neglect functional semantics, so the grasps they produce lack human-intent-driven functional plausibility. To address this, we propose FGS-Net, a functional-semantics-driven two-stage framework that, for the first time, enables end-to-end generation of semantically plausible and physically feasible 3D hand-object interactions solely from natural-language functional descriptions. The method integrates a text encoder, a conditional 3D generative network (FGG), an object pose approximator, and an energy-based pose refinement module (FGR), and operates without any 3D supervision. It synthesizes high-fidelity, temporally coherent interaction sequences with high geometric accuracy. Experiments demonstrate substantial improvements in both semantic plausibility and physical feasibility over state-of-the-art baselines on functional grasping tasks.
📝 Abstract
Hand-object interaction (HOI) is the fundamental link between humans and their environment, yet the dexterity and complexity of hand poses present significant challenges for gesture control. Despite significant advances in AI and robotics that enable machines to understand and simulate hand-object interactions, capturing the semantics of functional grasping tasks remains a considerable challenge. While previous methods can generate stable and geometrically correct 3D grasps, they fall short of functional grasps because they do not consider grasp semantics. To address this challenge, we propose an innovative two-stage framework, Functional Grasp Synthesis Net (FGS-Net), for generating 3D HOI driven by functional text. The framework consists of a text-guided 3D model generator, the Functional Grasp Generator (FGG), and a pose optimization strategy, the Functional Grasp Refiner (FGR). FGG generates 3D models of the hand and object from the text input, while FGR fine-tunes their poses using an Object Pose Approximator and energy functions, ensuring that the relative position between hand and object aligns with human intent and remains physically plausible. Extensive experiments demonstrate that our approach achieves precise, high-quality HOI generation without requiring additional 3D annotation data.
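To make the second-stage idea concrete, the toy sketch below illustrates what energy-based pose refinement of this kind can look like in general. It is a minimal illustration under assumed simplifications, not the paper's actual formulation: the energy terms (a nearest-point contact term and a spherical-proxy penetration penalty), their weights, the point-cloud representation, and the finite-difference gradient descent over a 3D translation are all hypothetical choices made for the example.

```python
# Illustrative sketch of energy-based pose refinement (FGR-style second stage).
# All energy terms, weights, and names here are hypothetical simplifications,
# not the formulation used in FGS-Net.
import numpy as np

def contact_energy(hand_pts, obj_pts):
    # Encourage hand points to touch the object surface:
    # sum of each hand point's distance to its nearest object point.
    d = np.linalg.norm(hand_pts[:, None, :] - obj_pts[None, :, :], axis=-1)
    return d.min(axis=1).sum()

def penetration_energy(hand_pts, obj_center, obj_radius):
    # Penalize hand points that fall inside a spherical proxy of the object.
    d = np.linalg.norm(hand_pts - obj_center, axis=-1)
    return np.clip(obj_radius - d, 0.0, None).sum()

def refine_translation(hand_pts, obj_pts, obj_center, obj_radius,
                       steps=200, lr=0.05, eps=1e-4):
    """Gradient-descend a 3D hand translation to minimize the total energy."""
    t = np.zeros(3)

    def energy(t):
        pts = hand_pts + t
        return (contact_energy(pts, obj_pts)
                + 10.0 * penetration_energy(pts, obj_center, obj_radius))

    for _ in range(steps):
        # Central finite-difference gradient of the energy w.r.t. translation.
        g = np.array([(energy(t + eps * e) - energy(t - eps * e)) / (2 * eps)
                      for e in np.eye(3)])
        t = t - lr * g
    return t

# Toy usage: pull a small "hand" point set toward a unit sphere's surface.
rng = np.random.default_rng(0)
obj_center = np.zeros(3)
obj_pts = rng.normal(size=(200, 3))
obj_pts /= np.linalg.norm(obj_pts, axis=-1, keepdims=True)  # unit sphere
hand_pts = rng.normal(size=(5, 3)) * 0.1 + np.array([3.0, 0.0, 0.0])
t = refine_translation(hand_pts, obj_pts, obj_center, 1.0)
```

In a real system the translation would be replaced by full hand and object poses, and the hand-crafted sphere proxy by the generated meshes; the pattern of summing differentiable contact and penetration terms and descending their gradient is the part the sketch is meant to convey.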