🤖 AI Summary
Generating 3D hand-object interaction meshes from text that are both visually realistic and physically plausible is difficult: mesh extraction from text-generated representations is ill-posed, and physical optimization over the resulting imperfect meshes is hard. This work proposes THOM, a training-free two-stage framework that first constructs Gaussian representations of the hand and object, then produces high-quality interaction meshes through physics-based optimization without relying on predefined object templates. Key innovations include topology-aware mesh extraction, a vertex-to-Gaussian mapping mechanism, vision-language-model-guided displacement optimization, and contact-aware physical refinement. Experiments demonstrate that THOM outperforms state-of-the-art methods in text alignment, visual fidelity, and physical plausibility of hand-object interactions.
📝 Abstract
The generation of 3D hand-object interactions (HOIs) from text is crucial for dexterous robotic grasping and VR/AR content generation, requiring both high visual fidelity and physical plausibility. However, mesh extraction from text-generated Gaussians is an ill-posed problem, and physics-based optimization on such erroneous meshes poses further challenges. To address these issues, we introduce THOM, a training-free framework that generates photorealistic, physically plausible 3D HOI meshes without requiring a template object mesh. THOM employs a two-stage pipeline: it first generates the hand and object Gaussians, then performs physics-based HOI optimization. Our new mesh extraction method and vertex-to-Gaussian mapping explicitly assign Gaussian elements to mesh vertices, enabling topology-aware regularization. Furthermore, we improve the physical plausibility of interactions through VLM-guided translation refinement and contact-aware optimization. Comprehensive experiments demonstrate that THOM consistently surpasses state-of-the-art methods in terms of text alignment, visual realism, and interaction plausibility.
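The abstract does not specify how the vertex-to-Gaussian mapping is computed, but the simplest form of such an explicit assignment is nearest-neighbor matching between mesh vertices and Gaussian centers. The sketch below is a toy illustration under that assumption; `nearest_gaussian` is a hypothetical name, not THOM's API, and the real method presumably uses a more sophisticated, topology-aware criterion.

```python
def nearest_gaussian(vertices, gaussian_centers):
    """Toy vertex-to-Gaussian mapping: assign each mesh vertex to its
    nearest Gaussian center (hypothetical stand-in for THOM's mapping).

    Returns a list of Gaussian indices, one per vertex; neighboring
    vertices that map to nearby Gaussians enable topology-aware
    regularization of the extracted mesh.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two 3D points.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return [
        min(range(len(gaussian_centers)), key=lambda j: dist2(v, gaussian_centers[j]))
        for v in vertices
    ]

# Toy data: 3 mesh vertices and 2 Gaussian centers.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.9, 0.1, 0.0)]
centers = [(0.1, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(nearest_gaussian(verts, centers))  # → [0, 1, 1]
```

With such an explicit mapping in hand, a topology regularizer can, for example, penalize adjacent vertices whose assigned Gaussians lie far apart.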