TeHOR: Text-Guided 3D Human and Object Reconstruction with Textures

📅 2026-02-23
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing methods for 3D human-object interaction reconstruction rely heavily on physical contact cues and neglect global appearance context, making them ill-suited for non-contact interactions. This work proposes a text-guided joint 3D human-object reconstruction approach that, for the first time, integrates textual semantics and global appearance context into the reconstruction pipeline. By aligning text with 3D semantics and modeling appearance-based contextual relationships, the method handles contact and non-contact interactions in a unified way. It overcomes the dependency on physical contact by jointly leveraging geometric and semantic information to produce textured, high-fidelity 3D scenes, achieving state-of-the-art performance across multiple metrics and generating semantically consistent, visually plausible 3D interaction scenes.
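Neither the summary nor the abstract states the actual training objective, so the following is only an illustrative form such a joint loss could take; every term name and weight here is an assumption, not drawn from the paper:

```latex
% Illustrative only: term names and weights are assumed, not from TeHOR.
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{geom}}
  \;+\; \lambda_{c}\,\mathcal{L}_{\mathrm{contact}}
  \;+\; \lambda_{t}\,\mathcal{L}_{\mathrm{text}}
  \;+\; \lambda_{a}\,\mathcal{L}_{\mathrm{app}}
```

Read this as: the geometry term handles image-space fit, the contact term applies only where physical contact exists, the text term aligns the reconstruction with the interaction description, and the appearance term injects global appearance context. For non-contact interactions the contact term effectively vanishes, and the text term carries the interaction constraint.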

📝 Abstract
Joint reconstruction of 3D human and object from a single image is an active research area, with pivotal applications in robotics and digital content creation. Despite recent advances, existing approaches suffer from two fundamental limitations. First, their reconstructions rely heavily on physical contact information, which inherently cannot capture non-contact human-object interactions, such as gazing at or pointing toward an object. Second, the reconstruction process is primarily driven by local geometric proximity, neglecting the human and object appearances that provide global context crucial for understanding holistic interactions. To address these issues, we introduce TeHOR, a framework built upon two core designs. First, beyond contact information, our framework leverages text descriptions of human-object interactions to enforce semantic alignment between the 3D reconstruction and its textual cues, enabling reasoning over a wider spectrum of interactions, including non-contact cases. Second, we incorporate appearance cues of the 3D human and object into the alignment process to capture holistic contextual information, thereby ensuring visually plausible reconstructions. As a result, our framework produces accurate and semantically coherent reconstructions, achieving state-of-the-art performance.
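The abstract's first design, semantic alignment between the 3D reconstruction and its textual cues, can be made concrete with a small sketch. Below, a rendering of the reconstructed scene is scored against the interaction text in CLIP's joint image-text embedding space. This is a minimal sketch under stated assumptions, not TeHOR's documented method: `render_scene` is a hypothetical stub standing in for a differentiable renderer, and the choice of CLIP itself is an assumption.

```python
# Minimal sketch of a text-guided semantic alignment loss for a reconstructed
# human-object scene. Hypothetical throughout: `render_scene` is a stub for a
# differentiable renderer, and the use of CLIP is an assumption, not TeHOR's
# documented method.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything in fp32 so gradients flow cleanly

def render_scene(params: torch.Tensor) -> torch.Tensor:
    """Stub differentiable renderer: maps scene parameters (human pose,
    object pose, textures) to a 1x3x224x224 RGB image in [0, 1]."""
    return torch.sigmoid(params).view(1, 3, 224, 224)

def semantic_alignment_loss(scene_params: torch.Tensor, text: str) -> torch.Tensor:
    """Cosine distance between the rendered reconstruction and the
    interaction description in CLIP's joint image-text embedding space."""
    image = render_scene(scene_params)
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(clip.tokenize([text]).to(device))
    return 1.0 - F.cosine_similarity(img_emb, txt_emb).mean()

# Usage: optimize scene parameters so the rendering matches the text. A
# description like "a person pointing at a laptop" constrains a non-contact
# interaction that a contact-only loss cannot express.
params = torch.randn(1, 3 * 224 * 224, device=device, requires_grad=True)
loss = semantic_alignment_loss(params, "a person pointing at a laptop")
loss.backward()
print(f"semantic alignment loss: {loss.item():.4f}")
```

Because the textured rendering, not just the geometry, enters the embedding, a loss of this shape also naturally accommodates the abstract's second design: appearance cues of the 3D human and object influence the alignment alongside the text.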
Problem

Research questions and friction points this paper is trying to address.

3D human-object reconstruction
non-contact interaction
semantic alignment
appearance cues
single-image reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

text-guided reconstruction
3D human-object interaction
semantic alignment
appearance cues
non-contact interaction