🤖 AI Summary
This work addresses the computationally challenging problem of recovering depth from a collection of defocused images by proposing an efficient global optimization method based on alternating minimization. When the depth map is held fixed, the forward model is linear in the all-in-focus image, so the image can be recovered by convex optimization; conversely, when the all-in-focus image is held fixed, the depth at each pixel can be solved independently and in parallel via grid search. By alternating these two steps and directly leveraging the physics-based optical forward model, the method solves depth from defocus at higher resolutions than current deep learning methods without relying on learned models, and it shows promising results compared to prior approaches on benchmark datasets with both synthetic and real defocus blur.
📝 Abstract
Though there exists a reasonable forward model for blur based on optical physics, recovering depth from a collection of defocused images remains a computationally challenging optimization problem. In this paper, we show that with contemporary optimization methods and reasonable computing resources, a global optimization approach to depth from defocus is feasible. Our approach rests on alternating minimization. When holding the depth map fixed, the forward model is linear with respect to the all-in-focus image. When holding the all-in-focus image fixed, the depth at each pixel can be computed independently, enabling embarrassingly parallel computation. We show that alternating between convex optimization and parallel grid search can effectively solve the depth-from-defocus problem at higher resolutions than current deep learning methods. We demonstrate our approach on benchmark datasets with synthetic and real defocus blur and show promising results compared to prior approaches. Our code is available at github.com/hollyjackson/dfd.
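The alternating scheme described in the abstract can be sketched roughly as follows. This is a toy illustration under assumed choices — a Gaussian point-spread function, a linear blur-vs-depth law `sigma = C * |depth - focus|`, a layered rendering approximation, and plain gradient descent standing in for a full convex solver — not the authors' actual forward model or implementation.

```python
# Hypothetical sketch of depth from defocus by alternating minimization.
# The Gaussian PSF, the blur law sigma = C*|d - f|, and all parameter
# values are illustrative assumptions, not the paper's actual model.
import numpy as np
from scipy.ndimage import gaussian_filter

C = 1.5  # assumed blur-per-unit-defocus coefficient

def defocus(img, sigma):
    """Blur with a Gaussian PSF (sigma = 0 means in focus)."""
    return gaussian_filter(img, sigma) if sigma > 0 else img

def render_stack(img, depth, focus_depths):
    """Forward model: each pixel is blurred according to its depth's
    distance from the focal plane (layered approximation)."""
    stack = []
    for f in focus_depths:
        out = np.zeros_like(img)
        for d in np.unique(depth):
            out[depth == d] = defocus(img, C * abs(d - f))[depth == d]
        stack.append(out)
    return np.stack(stack)

def solve_image(obs, depth, focus_depths, n_iter=60, lr=0.4):
    """Step 1: depth fixed -> the model is linear in the image, so run
    gradient descent on the least-squares objective (a stand-in for a
    full convex solver; the symmetric Gaussian blur approximates its
    own adjoint)."""
    img = obs.mean(axis=0).copy()
    for _ in range(n_iter):
        grad = np.zeros_like(img)
        for k, f in enumerate(focus_depths):
            resid = render_stack(img, depth, [f])[0] - obs[k]
            for d in np.unique(depth):
                grad[depth == d] += defocus(resid, C * abs(d - f))[depth == d]
        img -= lr * grad / len(focus_depths)
    return img

def solve_depth(img, obs, focus_depths, candidates):
    """Step 2: image fixed -> the depth at each pixel is chosen
    independently by grid search over candidates, so each candidate's
    cost map is computed in one vectorized (embarrassingly parallel)
    pass over all pixels."""
    costs = []
    for d in candidates:
        cost = sum((defocus(img, C * abs(d - f)) - obs[k]) ** 2
                   for k, f in enumerate(focus_depths))
        costs.append(cost)
    return np.asarray(candidates)[np.argmin(np.stack(costs), axis=0)]

# Tiny synthetic demo: a two-plane scene imaged with two focus settings.
rng = np.random.default_rng(0)
sharp = rng.random((48, 48))
true_depth = np.ones((48, 48)); true_depth[:, 24:] = 3.0
focus = [1.0, 3.0]
obs = render_stack(sharp, true_depth, focus)

depth_hat = np.full_like(true_depth, 2.0)   # flat initial depth guess
for _ in range(3):                          # alternating minimization
    img_hat = solve_image(obs, depth_hat, focus)
    depth_hat = solve_depth(img_hat, obs, focus, [1.0, 2.0, 3.0])
```

With the true sharp image in hand, the per-pixel grid search recovers the two-plane depth map exactly in this synthetic setup; the full alternation starts from a flat depth guess and refines both unknowns jointly, mirroring the structure (though not the scale or the solvers) of the method in the paper.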