Depth-Copy-Paste: Multimodal and Depth-Aware Compositing for Robust Face Detection

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional copy-paste data augmentation for face detection often yields distorted synthetic images due to inaccurate foreground segmentation, geometric misalignment, and semantically inconsistent backgrounds. To address these issues, we propose a depth-guided multimodal synthesis framework. Our method introduces a novel depth-guided sliding-window pasting mechanism to ensure physical plausibility; integrates BLIP and CLIP for cross-modal semantic-visual joint retrieval; and leverages SAM3 and Depth-Anything to extract occlusion-free, visible human regions—preserving facial texture fidelity while optimizing depth map alignment. This approach significantly enhances the robustness of face detectors under challenging conditions, including heavy occlusion, low illumination, and complex backgrounds. On WIDER FACE, it achieves absolute mAP gains of 3.2–5.7% over state-of-the-art depth-free augmentation methods.
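The cross-modal retrieval step described above can be sketched as a weighted combination of caption-level and image-level cosine similarities. This is a minimal illustration, not the paper's exact scoring function: the embeddings are assumed to come from BLIP (text) and CLIP (image) encoders, and `alpha` is a hypothetical weighting parameter.

```python
import numpy as np

def joint_retrieval_score(fg_text_emb, bg_text_emb,
                          fg_img_emb, bg_img_emb, alpha=0.5):
    """Rank a candidate background for a given foreground person by
    combining semantic (caption-embedding) and visual (image-embedding)
    cosine similarities. All inputs are assumed to be 1-D embedding
    vectors; alpha trades off semantics vs. visual coherence."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    semantic = cos(fg_text_emb, bg_text_emb)   # BLIP-style caption match
    visual = cos(fg_img_emb, bg_img_emb)       # CLIP-style visual match
    return alpha * semantic + (1 - alpha) * visual
```

In practice the background with the highest joint score would be retrieved from a candidate pool; the equal weighting here is only a placeholder.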

📝 Abstract
Data augmentation is crucial for improving the robustness of face detection systems, especially under challenging conditions such as occlusion, illumination variation, and complex environments. Traditional copy-paste augmentation often produces unrealistic composites due to inaccurate foreground extraction, inconsistent scene geometry, and mismatched background semantics. To address these limitations, we propose Depth-Copy-Paste, a multimodal and depth-aware augmentation framework that generates diverse and physically consistent face detection training samples by copying full-body person instances and pasting them into semantically compatible scenes. Our approach first employs BLIP and CLIP to jointly assess semantic and visual coherence, enabling automatic retrieval of the most suitable background images for the given foreground person. To ensure high-quality foreground masks that preserve facial details, we integrate SAM3 for precise segmentation and Depth-Anything to extract only the non-occluded visible person regions, preventing corrupted facial textures from being used in augmentation. For geometric realism, we introduce a depth-guided sliding-window placement mechanism that searches over the background depth map to identify paste locations with optimal depth continuity and scale alignment. The resulting composites exhibit natural depth relationships and improved visual plausibility. Extensive experiments show that Depth-Copy-Paste provides more diverse and realistic training data, leading to significant performance improvements in downstream face detection tasks compared with traditional copy-paste and depth-free augmentation methods.
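The depth-guided sliding-window placement described in the abstract can be sketched as an exhaustive search over the background depth map for the window whose depth best matches the foreground person's depth. This is an illustrative simplification: the mean-absolute-gap score and the median-depth summary are assumptions standing in for the paper's actual continuity and scale-alignment criteria.

```python
import numpy as np

def best_paste_location(bg_depth, fg_depth, stride=4):
    """Slide a window the size of the foreground over the background
    depth map and return the top-left (row, col) whose mean absolute
    depth gap to the person's median depth is smallest, as a proxy for
    depth continuity at the paste location."""
    fh, fw = fg_depth.shape
    H, W = bg_depth.shape
    person_depth = np.median(fg_depth)     # summary depth of the person
    best_score, best_pos = np.inf, (0, 0)
    for r in range(0, H - fh + 1, stride):
        for c in range(0, W - fw + 1, stride):
            window = bg_depth[r:r + fh, c:c + fw]
            score = np.abs(window - person_depth).mean()
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

A full implementation would also rescale the foreground according to the selected window's depth so that apparent size matches scene distance; here only the location search is shown.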
Problem

Research questions and friction points this paper is trying to address.

Generates realistic training samples for face detection
Ensures semantic and geometric consistency in data augmentation
Improves robustness against occlusion and environmental variations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal semantic coherence assessment using BLIP and CLIP
Depth-aware foreground segmentation with SAM3 and Depth-Anything
Depth-guided sliding window for geometric realism in placement
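The second innovation, occlusion-free foreground extraction, can be illustrated by filtering a person mask with depth: pixels inside the mask that are markedly nearer to the camera than the person's median depth are treated as occluders and dropped. This is a hypothetical stand-in for the SAM3 + Depth-Anything pipeline; the `margin` threshold and the assumption that larger depth values mean farther from the camera are illustrative choices.

```python
import numpy as np

def visible_region_mask(person_mask, depth, margin=0.25):
    """Keep only mask pixels whose depth is consistent with the person's
    median depth; pixels much nearer to the camera (likely occluders in
    front of the person) are removed from the mask."""
    person_depth = np.median(depth[person_mask])
    # Assumes larger depth = farther away; nearer pixels are occluders.
    near_occluder = depth < person_depth * (1.0 - margin)
    return person_mask & ~near_occluder
```

Filtering occluders out before pasting is what prevents corrupted facial textures (e.g. a hand or object in front of the face) from leaking into the synthetic composites.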