🤖 AI Summary
In lensless imaging, paired lensless-lensed supervision can bias models due to lens-lensless domain mismatch, while generic diffusion priors degrade severely under noisy, highly multiplexed, and ill-posed deconvolution. This paper proposes Null-Space Diffusion Distillation (NSDD): a single-pass student that distills the null-space component of an iterative solver, decoupling range-space enforcement (which guarantees measurement consistency) from null-space diffusion-prior updates (which restore realistic detail), enabling efficient reconstruction without paired supervision or real training data. Combining diffusion-prior-driven unsupervised learning, null-space distillation, range-space anchoring, and knowledge transfer from DDNM+, NSDD achieves the second-fastest inference on Lensless-FFHQ and PhlatCam, with perceptual quality approaching the teacher's and LPIPS scores clearly better than those of DPS and classical convex optimization methods.
📝 Abstract
State-of-the-art photorealistic reconstructions for lensless cameras often rely on paired lensless-lensed supervision, which can bias models due to lens-lensless domain mismatch. To avoid this, ground-truth-free diffusion priors are attractive; however, generic formulations tuned for conventional inverse problems often break under the noisy, highly multiplexed, and ill-posed lensless deconvolution setting. We observe that methods that separate range-space enforcement from null-space diffusion-prior updates yield stable, realistic reconstructions. Building on this, we introduce Null-Space Diffusion Distillation (NSDD): a single-pass student that distills the null-space component of an iterative DDNM+ solver, conditioned on the lensless measurement and on a range-space anchor. NSDD preserves measurement consistency and achieves photorealistic results without paired supervision at a fraction of the runtime and memory. On Lensless-FFHQ and PhlatCam, NSDD is the second fastest, behind Wiener, and achieves near-teacher perceptual quality (second-best LPIPS, below DDNM+), outperforming DPS and classical convex baselines. These results suggest a practical path toward fast, ground-truth-free, photorealistic lensless imaging.
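The range-/null-space split behind DDNM-style solvers can be illustrated with a toy sketch: the reconstruction x̂ = A⁺y + (I − A⁺A)x̄ pins down the measurement-consistent (range-space) part from the measurement y, while a prior estimate x̄ supplies only the null-space content. This is a minimal illustration under assumed stand-ins (a random matrix A for the lensless forward operator, a random x̄ for the diffusion-prior output), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 16                      # underdetermined: nontrivial null space
A = rng.standard_normal((m, n))   # stand-in for the lensless forward operator
x_true = rng.standard_normal(n)
y = A @ x_true                    # noiseless measurement, for illustration

A_pinv = np.linalg.pinv(A)
x_bar = rng.standard_normal(n)    # stand-in for a diffusion-prior estimate

# Range-space anchor from the measurement + null-space content from the prior
x_hat = A_pinv @ y + (np.eye(n) - A_pinv @ A) @ x_bar

# Measurement consistency holds no matter what the prior proposes,
# since A(I - A_pinv A) = 0 when A has full row rank.
print(np.allclose(A @ x_hat, y))  # True
```

The point is that the prior can only alter components invisible to the measurement, which is why separating the two subspaces yields stable, measurement-consistent reconstructions.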