🤖 AI Summary
Existing single-image relighting methods either struggle to achieve explicit, disentangled control over lighting parameters or rely on multi-view inputs and geometric reconstruction. To address this, we propose the first single-image relighting method that embeds the linear photometric properties of light into a diffusion-model fine-tuning paradigm, enabling independent, fine-grained adjustment of target light intensity, correlated color temperature, and ambient illumination. Built on lightweight fine-tuning of Stable Diffusion, the approach introduces a physics-inspired RGB-scale chromaticity–intensity perturbation mechanism and a paired distillation training strategy driven jointly by real image pairs and large-scale synthetic renderings. The method significantly outperforms state-of-the-art single-image relighting approaches: user studies show a 37% increase in preference rate, while the model maintains real-time interactivity, high photometric consistency, and superior texture fidelity.
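Read through the lens of the linearity of light, the disentangled controls described above correspond to a simple additive image formation model in linear RGB. The notation below is our own sketch, not the paper's: α scales the target light's intensity, c is an RGB gain chosen from the desired color temperature, and β scales the ambient term.

```latex
% Illustrative additive formation model in linear RGB (notation assumed, not from the paper):
%   I_amb : ambient-only image,  I_src : isolated contribution of the target light
I(\alpha, \beta, c) \;=\; \beta\, I_{\mathrm{amb}} \;+\; \alpha \,\big(c \odot I_{\mathrm{src}}\big),
\qquad \alpha \ge 0,\;\; \beta \ge 0,\;\; c \in \mathbb{R}^{3}_{\ge 0}.
```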
📝 Abstract
We present a simple yet effective diffusion-based method for fine-grained, parametric control over light sources in an image. Existing relighting methods either rely on multiple input views to perform inverse rendering at inference time or fail to provide explicit control over light changes. Our method fine-tunes a diffusion model on a small set of real raw photograph pairs, supplemented by synthetically rendered images at scale, to elicit its photorealistic prior for relighting. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. Using this data and an appropriate fine-tuning scheme, we train a model for precise illumination changes with explicit control over light intensity and color. Lastly, we show that our method achieves compelling light editing results and outperforms existing methods in user preference.
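Because light is additive in linear (raw) RGB, a target light's isolated contribution can be obtained by differencing two captures of the same scene with the light on and off, and then rescaled to synthesize arbitrary intensity and color changes. The following is a minimal sketch of that pair-synthesis idea, assuming paired light-on/light-off raw captures; the function name, parameters, and the commented-out loader are illustrative, not the paper's code.

```python
import numpy as np

def synthesize_relit_image(img_ambient, img_with_light, intensity=0.5,
                           rgb_gain=(1.0, 1.0, 1.0), ambient_scale=1.0):
    """Synthesize a relit image by exploiting the linearity of light.

    img_ambient    -- linear-RGB (raw) image with only ambient illumination, shape (H, W, 3)
    img_with_light -- same scene and camera with the target light switched on
    intensity      -- scalar multiplier for the target light's contribution
    rgb_gain       -- per-channel gain, e.g. derived from a desired color temperature
    ambient_scale  -- scalar multiplier for the ambient illumination
    """
    # Light is additive in linear RGB, so the target light's isolated
    # contribution is the difference of the two captures.
    light_only = np.clip(img_with_light - img_ambient, 0.0, None)

    # Rescale the two components independently and recombine.
    return ambient_scale * img_ambient + intensity * np.asarray(rgb_gain) * light_only

# Example: halve the target light, warm its color, leave ambient unchanged.
# ambient = load_raw("scene_light_off.dng")   # hypothetical raw loader
# lit     = load_raw("scene_light_on.dng")
# target  = synthesize_relit_image(ambient, lit, intensity=0.5, rgb_gain=(1.15, 1.0, 0.8))
```

Pairs produced this way can depict a controlled change of either the target light or the ambient term, which is what supplies the supervision for fine-tuning with explicit intensity and color parameters.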