Enrich the content of the image Using Context-Aware Copy Paste

📅 2024-07-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing copy-paste data augmentation methods neglect semantic contextual relationships between source and target images, leading to artifacts such as category mismatches and spatial incoherence in fused objects. To address this, we propose a context-aware copy-paste augmentation framework centered on a Bidirectional Latent Information Propagation (BLIP) mechanism. BLIP implicitly models cross-image latent dependencies between source and target images, enabling annotation-free content–category alignment. The framework integrates SAM for precise foreground segmentation and YOLO for robust object localization, with BLIP orchestrating bidirectional semantic guidance during fusion. Evaluated on multiple vision benchmarks, our method significantly improves the realism and diversity of synthesized images. It consistently boosts downstream performance across classification, detection, and segmentation tasks—achieving average gains of 2.1–4.7 percentage points—demonstrating strong generalization and augmentation efficacy.

Technology Category

Application Category

📝 Abstract
Data augmentation remains a widely utilized technique in deep learning, particularly in tasks such as image classification, semantic segmentation, and object detection. Among them, Copy-Paste is a simple yet effective method and gain great attention recently. However, existing Copy-Paste often overlook contextual relevance between source and target images, resulting in inconsistencies in generated outputs. To address this challenge, we propose a context-aware approach that integrates Bidirectional Latent Information Propagation (BLIP) for content extraction from source images. By matching extracted content information with category information, our method ensures cohesive integration of target objects using Segment Anything Model (SAM) and You Only Look Once (YOLO). This approach eliminates the need for manual annotation, offering an automated and user-friendly solution. Experimental evaluations across diverse datasets demonstrate the effectiveness of our method in enhancing data diversity and generating high-quality pseudo-images across various computer vision tasks.
Problem

Research questions and friction points this paper is trying to address.

Enhance image content with context-aware Copy-Paste augmentation
Address contextual relevance gaps in existing Copy-Paste methods
Automate object integration using BLIP, SAM, and YOLO
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-Aware Copy Paste for image enrichment
BLIP and SAM for automated content integration
YOLO ensures cohesive object matching
🔎 Similar Papers
No similar papers found.