🤖 AI Summary
This work addresses the challenges of semantic coherence and spatial plausibility between foreground objects and background scenes in general-purpose object-scene compositing. To this end, we propose the first Affordance-driven image synthesis framework. Methodologically, we introduce Affordance modeling—previously unexplored in general image compositing—into this task; construct SAM-FB, a large-scale dataset comprising 3 million samples; and design Mask-Aware Dual Diffusion (MADD), a dual-stream diffusion architecture that explicitly incorporates insertion masks and enables Affordance-guided alignment of object position and scene semantics. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods across diverse object categories and cross-domain real-world images, exhibiting strong generalization capability. Both source code and the SAM-FB dataset are publicly released.
📝 Abstract
As a common image editing operation, image composition involves integrating foreground objects into background scenes. In this paper, we expand the application of the concept of Affordance from human-centered image composition tasks to a more general object-scene composition framework, addressing the complex interplay between foreground objects and background scenes. Following the principle of Affordance, we define the affordance-aware object insertion task, which aims to seamlessly insert any object into any scene with various position prompts. To address the limited data issue and incorporate this task, we constructed the SAM-FB dataset, which contains over 3 million examples across more than 3,000 object categories. Furthermore, we propose the Mask-Aware Dual Diffusion (MADD) model, which utilizes a dual-stream architecture to simultaneously denoise the RGB image and the insertion mask. By explicitly modeling the insertion mask in the diffusion process, MADD effectively facilitates the notion of affordance. Extensive experimental results show that our method outperforms the state-of-the-art methods and exhibits strong generalization performance on in-the-wild images. Please refer to our code on https://github.com/KaKituken/affordance-aware-any.