🤖 AI Summary
Fixed depth of field (DoF) in captured images precludes post-capture focus adjustment, making it impossible to correct defocus artifacts such as out-of-focus subjects. This paper proposes a diffusion Transformer-based image refocusing method enabling arbitrary specification of a new focal plane and blur magnitude for DoF editing. Our approach introduces a novel stacking constraint that jointly incorporates camera imaging physics and scene geometric priors, ensuring both structural consistency and optical plausibility in the refocused outputs. We design a synthetic data generation pipeline pairing multi-focus images with corresponding bokeh levels, and evaluate our method on a newly constructed multi-scene benchmark. Results demonstrate superior robustness and accuracy, significantly outperforming state-of-the-art learning-based and traditional methods across diverse real-world scenarios. This work establishes a new paradigm for controllable DoF manipulation in computational photography and generative AI.
📝 Abstract
The depth-of-field (DoF) effect, which introduces aesthetically pleasing blur, enhances photographic quality but is fixed and difficult to modify once the image has been created. This becomes problematic when the applied blur is undesirable (e.g., when the subject is out of focus). To address this, we propose DiffCamera, a model that enables flexible refocusing of a created image conditioned on an arbitrary new focus point and blur level. Specifically, we design a diffusion transformer framework for refocusing learning. However, training requires paired images of the same scene with different focus planes and bokeh levels, which are hard to acquire. To overcome this limitation, we develop a simulation-based pipeline to generate large-scale image pairs with varying focus planes and bokeh levels. With the simulated data, we find that training with only a vanilla diffusion objective often leads to incorrect DoF behaviors due to the complexity of the task, which calls for a stronger constraint during training. Inspired by the photographic principle that photos focused at different planes can be linearly blended into a multi-focus image, we propose a stacking constraint during training to enforce precise DoF manipulation. This constraint enhances model training by imposing physically grounded refocusing behavior: the refocused results must faithfully align with the scene structure and camera conditions so that they can be blended into the correct multi-focus image. We also construct a benchmark to evaluate the effectiveness of our refocusing model. Extensive experiments demonstrate that DiffCamera supports stable refocusing across a wide range of scenes, providing unprecedented control over DoF adjustments for photography and generative AI applications.
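The stacking idea in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it only shows the underlying principle as a loss term, assuming an equal-weight linear blend of the refocused stack (the function name, weighting scheme, and mean-squared-error form are illustrative assumptions, not details from the paper):

```python
import numpy as np

def stacking_constraint_loss(refocused_stack, multi_focus_target, weights=None):
    """Hypothetical stacking-constraint loss.

    Images refocused at several focal planes should linearly blend
    into the corresponding multi-focus image. Penalizes the MSE
    between the blended stack and the multi-focus target.
    """
    k = len(refocused_stack)
    if weights is None:
        # Assumption: equal blending weights; the paper does not
        # specify the weighting scheme here.
        weights = np.full(k, 1.0 / k)
    blended = sum(w * img for w, img in zip(weights, refocused_stack))
    return float(np.mean((blended - multi_focus_target) ** 2))
```

In training, such a term would be added to the diffusion objective so that refocused predictions at different focal planes remain mutually consistent with the scene's multi-focus composite, rather than each being plausible in isolation.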