EraseLoRA: MLLM-Driven Foreground Exclusion and Background Subtype Aggregation for Dataset-Free Object Removal

📅 2025-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Object removal requires high-fidelity reconstruction of occluded background regions without regenerating the target object. Existing dataset-free methods that redirect self-attention fail in two ways: they confuse foreground with background, mistaking distractor foregrounds for background cues, and they lose fine-detail coherence. This paper proposes a dataset-free object removal framework that leverages a multimodal large language model (MLLM) to parse a single image-mask pair and precisely discriminate the target foreground, distractor foregrounds, and clean background. It introduces a background-aware foreground exclusion mechanism and a subtype-aggregated reconstruction strategy that avoid destructive attention operations, together with test-time adaptive optimization and a fine-grained-to-global consistency alignment objective. The method is plug-and-play and significantly outperforms prior dataset-free approaches across multiple benchmarks while matching supervised methods in both local texture fidelity and global structural integrity.

📝 Abstract
Object removal differs from generic inpainting, since it must prevent the masked target from reappearing and reconstruct the occluded background with structural and contextual fidelity, rather than merely filling a hole plausibly. Recent dataset-free approaches that redirect self-attention inside the mask fail in two ways: non-target foregrounds are often misinterpreted as background, which regenerates unwanted objects, and direct attention manipulation disrupts fine details and hinders coherent integration of background cues. We propose EraseLoRA, a novel dataset-free framework that replaces attention surgery with background-aware reasoning and test-time adaptation. First, Background-aware Foreground Exclusion (BFE) uses a multimodal large language model to separate target foreground, non-target foregrounds, and clean background from a single image-mask pair without paired supervision, producing reliable background cues while excluding distractors. Second, Background-aware Reconstruction with Subtype Aggregation (BRSA) performs test-time optimization that treats inferred background subtypes as complementary pieces and enforces their consistent integration through reconstruction and alignment objectives, preserving local detail and global structure without explicit attention intervention. We validate EraseLoRA as a plug-in to pretrained diffusion models and across benchmarks for object removal, demonstrating consistent improvements over dataset-free baselines and competitive results against dataset-driven methods. The code will be made available upon publication.
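The BRSA objective described above combines a per-subtype reconstruction term with a consistency-alignment term. The paper does not publish its loss formulation, so the following is only a minimal illustrative sketch: the function name `brsa_objective`, the per-subtype masked MSE, and the variance-of-means alignment penalty are all assumptions standing in for the actual reconstruction and alignment objectives.

```python
import numpy as np

def brsa_objective(pred, target, subtype_masks, w_align=0.1):
    """Illustrative stand-in for BRSA's test-time objective (not the
    authors' formulation): masked reconstruction over each inferred
    background subtype, plus a term encouraging the subtypes to
    integrate coherently."""
    recon = 0.0
    means = []
    for m in subtype_masks:
        # Reconstruction restricted to this subtype's mask.
        diff = (pred - target) * m
        recon += float(np.mean(diff ** 2))
        # Mean intensity inside the subtype, used for alignment below.
        means.append(float((pred * m).sum() / max(m.sum(), 1e-8)))
    # Alignment: penalize disagreement between subtype statistics.
    align = float(np.var(means))
    return recon + w_align * align
```

In a test-time adaptation loop, a loss of this shape would be minimized over a small set of adapter parameters (the "LoRA" in EraseLoRA) while the pretrained diffusion backbone stays frozen.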
Problem

Research questions and friction points this paper is trying to address.

Prevents target object reappearance in removal tasks
Separates foreground and background without paired supervision
Preserves details and structure without attention manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLM separates foreground and background without supervision
Test-time optimization integrates background subtypes for coherent reconstruction
Plug-in framework enhances diffusion models without dataset dependency
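The three innovations above chain into a single pipeline: MLLM parsing (BFE), test-time adaptation (BRSA), and plug-in inpainting. The sketch below shows only that control flow; `erase_object` and its callable parameters are hypothetical stand-ins, not the authors' API, with the MLLM parser, adapter optimization, and diffusion inpainter supplied by the caller.

```python
def erase_object(image, mask, parse_subtypes, adapt, inpaint):
    """Hypothetical orchestration of the described pipeline; every
    callable here is an illustrative stand-in."""
    # BFE: an MLLM-style parser splits the scene into three subtypes
    # from a single image-mask pair, without paired supervision.
    target_fg, distractor_fg, background = parse_subtypes(image, mask)
    # Distractor foregrounds are excluded from the conditioning cues,
    # so they cannot be regenerated as "background".
    cues = [background]
    # BRSA: test-time optimization of a plug-in adapter (e.g. LoRA)
    # against the background cues, with the diffusion model frozen.
    model = adapt(image, cues)
    # Reconstruct the occluded region without the target reappearing.
    return inpaint(model, image, target_fg)
```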
Sanghyun Jo
OGQ · SNU AIBL Lab
Weakly-supervised Segmentation · Data-efficient Learning · Generative AI
Donghwan Lee
Department of Biomedical Sciences, Seoul National University, Seoul, Korea
Eunji Jung
Department of Biomedical Sciences, Seoul National University, Seoul, Korea
Seong Je Oh
Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
Kyungsu Kim
School of Transdisciplinary Innovations, Seoul National University, Seoul, Korea