OAHuman: Occlusion-Aware 3D Human Reconstruction from Monocular Images

📅 2026-03-15
🤖 AI Summary
This work addresses the challenge of incomplete geometry and distorted textures in 3D human reconstruction from monocular images caused by occlusions. To mitigate this issue, we propose an occlusion-aware decoupled reconstruction framework that explicitly separates geometry recovery from texture synthesis. Our approach leverages information from visible regions to guide the reconstruction of occluded geometry while independently generating high-quality textures, thereby preventing mutual interference between the two tasks. Built upon neural implicit representations, the method achieves high-fidelity 3D human reconstruction even under severe occlusion. Extensive evaluations on multiple occlusion-heavy datasets demonstrate that our approach significantly outperforms existing methods in terms of structural completeness, surface detail, and texture realism.

📝 Abstract
Monocular 3D human reconstruction in real-world scenarios remains highly challenging due to frequent occlusions from surrounding objects, people, or image truncation. Such occlusions lead to missing geometry and unreliable appearance cues, severely degrading the completeness and realism of reconstructed human models. Although recent neural implicit methods achieve impressive results on clean inputs, they struggle under occlusion due to entangled modeling of shape and texture. In this paper, we propose OAHuman, an occlusion-aware framework that explicitly decouples geometry reconstruction and texture synthesis for robust 3D human modeling from a single RGB image. The core innovation lies in the decoupling-perception paradigm, which addresses the fundamental issue of geometry-texture cross-contamination in occluded regions. Our framework ensures that geometry reconstruction is perceptually reinforced even in occluded areas, isolating it from texture interference. In parallel, texture synthesis is learned exclusively from visible regions, preventing texture errors from being transferred to the occluded areas. This decoupling approach enables OAHuman to achieve robust and high-fidelity reconstruction under occlusion, which has been a long-standing challenge in the field. Extensive experiments on occlusion-rich benchmarks demonstrate that OAHuman achieves superior performance in terms of structural completeness, surface detail, and texture realism, significantly improving monocular 3D human reconstruction under occlusion conditions.
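The abstract's core idea is that texture supervision is restricted to visible pixels while geometry is supervised everywhere, so texture errors in occluded regions cannot contaminate either branch. As a rough illustration of that decoupling (not the paper's actual losses; the function name, the visibility mask, and the simple L2 terms are assumptions for this sketch):

```python
import numpy as np

def decoupled_losses(pred_geom, gt_geom, pred_tex, gt_tex, visibility):
    """Toy sketch of occlusion-aware geometry-texture decoupling.

    visibility: binary mask, 1 = pixel visible, 0 = occluded.
    Geometry is supervised over the full image (occluded regions
    included), while texture is supervised only on visible pixels,
    so texture errors in occluded areas never enter the texture loss.
    """
    # Geometry branch: full-image supervision, occluded regions included.
    geom_loss = np.mean((pred_geom - gt_geom) ** 2)

    # Texture branch: visibility-masked supervision only.
    vis = visibility.astype(bool)
    tex_loss = np.mean((pred_tex[vis] - gt_tex[vis]) ** 2) if vis.any() else 0.0

    return geom_loss, tex_loss
```

In this toy version, a large texture error confined to the occluded half of the image leaves the texture loss at zero, which is the cross-contamination-prevention behavior the abstract describes.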
Problem

Research questions and friction points this paper is trying to address.

occlusion, 3D human reconstruction, monocular images, geometry-texture decoupling, real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

occlusion-aware, decoupling-perception, 3D human reconstruction, monocular image, geometry-texture disentanglement
Authors
Yuanwang Yang
College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
Hongliang Liu
College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
Muxin Zhang
College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
Nan Ma
Beijing University of Posts and Telecommunications
Jingyu Yang
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Yu-Kun Lai
Professor, Cardiff University
Geometric Modeling, Geometry Processing, Computer Graphics, Image Processing, Computer Vision
Kun Li
Professor, Tianjin University
Computer Vision, Computer Graphics, Image and Video Processing