🤖 AI Summary
This paper introduces Insert Anything, a unified framework for the general reference-image-driven insertion problem: seamlessly compositing humans, objects, or garments into a target scene under user-specified mask or text guidance, while preserving identity fidelity and fine detail and adapting local style, color, and texture. Methodologically, it proposes an in-context editing framework built on Diffusion Transformers (DiT) that treats the reference image as contextual input, using the DiT's multimodal attention to support both mask- and text-guided editing. A dual prompting strategy jointly promotes identity consistency with the reference and coherence with the target scene. Trained in a single phase on the newly curated AnyInsertion dataset (120K prompt-image pairs), the model generalizes across insertion tasks: on AnyInsertion and on established benchmarks (DreamBooth, VTON-HD), it outperforms prior methods in virtual try-on, creative content generation, and scene composition, with gains in both qualitative realism and quantitative metrics.
📝 Abstract
This work presents Insert Anything, a unified framework for reference-based image insertion that seamlessly integrates objects from reference images into target scenes under flexible, user-specified control guidance. Instead of training separate models for individual tasks, our approach is trained once on our new AnyInsertion dataset--comprising 120K prompt-image pairs covering diverse tasks such as person, object, and garment insertion--and effortlessly generalizes to a wide range of insertion scenarios. Such a challenging setting requires capturing both identity features and fine-grained details, while allowing versatile local adaptations in style, color, and texture. To this end, we propose to leverage the multimodal attention of the Diffusion Transformer (DiT) to support both mask- and text-guided editing. Furthermore, we introduce an in-context editing mechanism that treats the reference image as contextual information, employing two prompting strategies to harmonize the inserted elements with the target scene while faithfully preserving their distinctive features. Extensive experiments on AnyInsertion, DreamBooth, and VTON-HD benchmarks demonstrate that our method consistently outperforms existing alternatives, underscoring its great potential in real-world applications such as creative content generation, virtual try-on, and scene composition.
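The abstract describes an in-context editing mechanism in which the reference image is treated as contextual input and the DiT's multimodal attention relates prompt, reference, and target. The sketch below illustrates that concatenation-style conditioning in a minimal form; the token counts, the single attention block, the mask encoding, and all function names are illustrative assumptions, not the paper's actual architecture or token layout.

```python
# Minimal, illustrative sketch of in-context conditioning: prompt, reference-image,
# mask, and noisy scene tokens are concatenated into one sequence so a DiT-style
# attention block can relate them. Placeholder shapes and module, not the paper's code.
import torch
import torch.nn as nn


class JointAttentionBlock(nn.Module):
    """One DiT-style block in which all modalities attend to one another."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        return x + self.mlp(self.norm2(x))


def in_context_insert_step(text_tok, ref_tok, mask_tok, scene_tok, block):
    """Single denoising-style pass: the reference image acts as context tokens;
    only the scene positions are read back as the edited prediction."""
    seq = torch.cat([text_tok, ref_tok, mask_tok, scene_tok], dim=1)
    out = block(seq)
    return out[:, -scene_tok.shape[1]:]


if __name__ == "__main__":
    dim = 64
    block = JointAttentionBlock(dim)
    pred = in_context_insert_step(
        torch.randn(1, 16, dim),   # encoded text prompt(s)
        torch.randn(1, 256, dim),  # patchified reference image (identity source)
        torch.randn(1, 256, dim),  # encoded insertion mask / control signal
        torch.randn(1, 256, dim),  # patchified noisy target scene
        block,
    )
    print(pred.shape)  # torch.Size([1, 256, 64])
```

Concatenating reference tokens into the attention sequence is one common way to let a transformer share attention between a reference and a target; the paper's exact guidance mechanism and prompting strategies may differ.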