🤖 AI Summary
This work addresses zero-shot, high-fidelity, and controllable material transfer for objects in images, without requiring paired data or model fine-tuning. We propose a conditional diffusion framework featuring three key innovations: (1) the first continuous material strength control mechanism; (2) joint optimization via material-semantic guidance, boundary-aware mask refinement, and background consistency constraints; and (3) scene-aware synthesis that jointly ensures material plausibility and robust background integration. Evaluated on a newly constructed real-world material transfer benchmark, our method improves LPIPS and FID by over 12% relative to state-of-the-art methods, significantly suppressing boundary artifacts while preserving object geometry and background coherence. A user study further confirms its superiority in both control precision and visual realism.
📝 Abstract
Manipulating the material appearance of objects in images is critical for applications such as augmented reality, virtual prototyping, and digital content creation. We present MaterialFusion, a novel framework for high-quality material transfer that allows users to adjust the degree of material application, achieving an optimal balance between the new material's properties and the object's original features. MaterialFusion seamlessly integrates the modified object into the scene by maintaining background consistency and mitigating boundary artifacts. To thoroughly evaluate our approach, we compiled a dataset of real-world material transfer examples and conducted extensive comparative analyses. Through comprehensive quantitative evaluations and user studies, we demonstrate that MaterialFusion significantly outperforms existing methods in quality, user control, and background preservation. Code is available at https://github.com/kzGarifullin/MaterialFusion.