Stable Diffusion-Based Approach for Human De-Occlusion

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of reconstructing human structure and appearance under severe occlusion. We propose a two-stage de-occlusion framework: in the first stage, a diffusion model learns human structural priors and performs geometry-consistent completion guided by joint heatmaps and an observability mask; in the second stage, a text-enhanced RGB generation mechanism is introduced: semantic cues are extracted via visual question answering (VQA), textual priors are encoded with CLIP, and a fine-tuned decoder mitigates latent-space degradation. Built upon Stable Diffusion as the generative backbone, our method significantly improves reconstruction quality in heavily occluded regions. Consequently, it substantially enhances downstream tasks including 2D pose estimation and 3D human reconstruction. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks.
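The two-stage flow the summary describes can be sketched as a simple data-flow skeleton. This is a hedged illustration, not the authors' code: the `stub_*` functions stand in for the paper's diffusion models, and every name, shape, and threshold below is an assumption for illustration only.

```python
import numpy as np

def stub_mask_diffusion(visible_mask, joint_heatmaps):
    """Stage-1 stand-in: complete the amodal (full-body) mask from the
    visible-region mask and occluded-joint heatmaps. A real model would
    denoise a mask latent conditioned on both inputs; here we just union
    the visible mask with thresholded heatmap support."""
    completed = np.clip(visible_mask + (joint_heatmaps.max(axis=0) > 0.5), 0, 1)
    return completed.astype(np.float32)

def stub_rgb_diffusion(image, amodal_mask, text_embedding):
    """Stage-2 stand-in: inpaint RGB inside the amodal mask, conditioned
    on a CLIP-style text embedding (only its shape is checked here)."""
    assert text_embedding.ndim == 1
    # Occluded pixels = inside the amodal mask but currently empty.
    hole = amodal_mask[..., None] * (1 - (image.sum(-1, keepdims=True) > 0))
    return image + hole * 0.5  # fill occluded pixels with a constant

def de_occlude(image, visible_mask, joint_heatmaps, text_embedding):
    amodal_mask = stub_mask_diffusion(visible_mask, joint_heatmaps)  # stage 1
    rgb = stub_rgb_diffusion(image, amodal_mask, text_embedding)     # stage 2
    return amodal_mask, rgb

H, W = 8, 8
image = np.zeros((H, W, 3), np.float32)
image[2:6, 2:4] = 1.0                                   # visible torso pixels
visible_mask = (image.sum(-1) > 0).astype(np.float32)
heatmaps = np.zeros((2, H, W), np.float32)
heatmaps[0, 3, 5] = 0.9                                 # one occluded joint
text_emb = np.zeros(512, np.float32)                    # CLIP-sized placeholder

amodal, rgb = de_occlude(image, visible_mask, heatmaps, text_emb)
```

The key property mirrored here is the ordering: the stage-1 amodal mask tells stage 2 exactly which pixels need RGB synthesis, while visible pixels pass through untouched.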

📝 Abstract
Humans can infer the missing parts of an occluded object by leveraging prior knowledge and visible cues. However, enabling deep learning models to accurately predict such occluded regions remains a challenging task. De-occlusion addresses this problem by reconstructing both the mask and RGB appearance. In this work, we focus on human de-occlusion, specifically targeting the recovery of occluded body structures and appearances. Our approach decomposes the task into two stages: mask completion and RGB completion. The first stage leverages a diffusion-based human body prior to provide a comprehensive representation of body structure, combined with occluded joint heatmaps that offer explicit spatial cues about missing regions. The reconstructed amodal mask then serves as a conditioning input for the second stage, guiding the model on which areas require RGB reconstruction. To further enhance RGB generation, we incorporate human-specific textual features derived using a visual question answering (VQA) model and encoded via a CLIP encoder. RGB completion is performed using Stable Diffusion, with decoder fine-tuning applied to mitigate pixel-level degradation in visible regions, a known limitation of prior diffusion-based de-occlusion methods caused by latent-space transformations. Our method effectively reconstructs human appearances even under severe occlusions and consistently outperforms existing methods in both mask and RGB completion. Moreover, the de-occluded images generated by our approach can improve the performance of downstream human-centric tasks, such as 2D pose estimation and 3D human reconstruction. The code will be made publicly available.
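The abstract's motivation for decoder fine-tuning (visible pixels degrade after a latent-space round trip) can be shown with a toy numerical example. This is a hedged sketch, not the paper's model: quantization stands in for Stable Diffusion's VAE bottleneck, and the paste-back compositing shown at the end is a common alternative mitigation, whereas the paper instead fine-tunes the decoder itself.

```python
import numpy as np

def lossy_vae_roundtrip(image, levels=16):
    """Stand-in for a VAE encode/decode: quantization plays the role of
    information lost in the compressed latent space."""
    return np.round(image * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))
visible_mask = np.zeros((8, 8, 1))
visible_mask[:, :4] = 1.0                      # left half of the person visible

# Even pixels the model never edits come back slightly altered.
decoded = lossy_vae_roundtrip(image)
err_before = np.abs((decoded - image) * visible_mask).max()

# Naive mitigation: paste original visible pixels back over the output
# (the paper's decoder fine-tuning avoids the seams this can introduce).
composited = visible_mask * image + (1 - visible_mask) * decoded
err_after = np.abs((composited - image) * visible_mask).max()
```

Here `err_before` is nonzero while `err_after` is exactly zero, which is precisely the pixel-level fidelity in visible regions that the fine-tuned decoder is meant to recover without hard compositing.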
Problem

Research questions and friction points this paper is trying to address.

Recovering occluded human body structures and appearances
Decomposing de-occlusion into mask and RGB completion stages
Enhancing RGB generation with human-specific textual features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion-based human body prior
Incorporates human-specific textual features
Fine-tunes Stable Diffusion for RGB completion
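The "human-specific textual features" item can be sketched as a prompt-assembly step: a VQA model answers attribute questions about the visible person, the answers are composed into a prompt, and a text encoder maps it to a conditioning vector. Both models are stubbed below, and the particular questions and canned answers are purely illustrative assumptions.

```python
import numpy as np

# Illustrative attribute questions a VQA model might be asked.
QUESTIONS = {
    "gender": "What is the gender of the person?",
    "top": "What is the person wearing on top?",
    "bottom": "What is the person wearing on the bottom?",
}

def stub_vqa(image, question):
    """Stand-in for a VQA model; returns canned answers for the sketch."""
    canned = {"gender": "woman", "top": "red jacket", "bottom": "blue jeans"}
    for key, q in QUESTIONS.items():
        if q == question:
            return canned[key]
    return "unknown"

def build_prompt(image):
    answers = {k: stub_vqa(image, q) for k, q in QUESTIONS.items()}
    return f"a {answers['gender']} wearing a {answers['top']} and {answers['bottom']}"

def stub_text_encoder(prompt, dim=8):
    """Stand-in for a CLIP text encoder: a deterministic, normalized
    bag-of-characters embedding."""
    vec = np.zeros(dim)
    for i, ch in enumerate(prompt):
        vec[i % dim] += ord(ch)
    return vec / np.linalg.norm(vec)

prompt = build_prompt(image=None)
embedding = stub_text_encoder(prompt)
```

The resulting embedding would then condition the stage-2 RGB generator, giving it semantic hints (clothing, identity attributes) about regions where no pixels are visible.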
Seung Young Noh
Kwangwoon University, Seoul, South Korea
Ju Yong Chang
Professor, Kwangwoon University
Computer Vision · Machine Learning