🤖 AI Summary
Blind aberration correction suffers from poor generalization due to two key bottlenecks: limited scalability of training data and the absence of optical degradation priors. To address these, we propose (1) AODLibpro, a large-scale lens dataset with uniformly diverse degradations that enables scalable, distribution-guided learning; and (2) Latent PSF Representation (LPR), an implicit point spread function (PSF) modeling framework built on a VQVAE that encodes structured optical priors in latent space, yielding interpretable, compact, and physically grounded degradation constraints. Experiments demonstrate state-of-the-art blind correction on both synthetic and real-world lens data: our method generalizes substantially better to unseen lens aberrations and makes markedly more efficient use of degradation priors, outperforming existing approaches in robustness and interpretability.
📝 Abstract
The emerging deep-learning-based lens library pre-training (LensLib-PT) pipeline offers a new avenue for blind lens aberration correction: a single universal neural network is trained to handle diverse unknown optical degradations. This work proposes the OmniLens++ framework, which resolves two challenges that limit the generalization of existing pipelines: the difficulty of scaling training data and the absence of prior guidance characterizing optical degradation. To improve data scalability, we broaden the design specifications to increase the degradation diversity of the lens source, and we sample a more uniform distribution by quantifying the spatial-variation patterns and severity of optical degradation. On the model side, to leverage Point Spread Functions (PSFs), which intuitively describe optical degradation, as guidance within a blind paradigm, we propose the Latent PSF Representation (LPR). A VQVAE framework learns latent features of the LensLib's PSFs, and modeling the optical degradation process constrains the learning of these degradation priors. Experiments on diverse aberrations of real-world lenses and a synthetic LensLib show that OmniLens++ achieves state-of-the-art generalization in blind aberration correction. Beyond raw performance, AODLibpro proves to be a scalable foundation for more effective training across diverse aberrations, and LPR further unlocks the potential of the large-scale LensLib. The source code and datasets will be made publicly available at https://github.com/zju-jiangqi/OmniLens2.
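The abstract describes optical degradation through PSFs. As a minimal sketch of the forward model that such priors characterize (not the paper's actual pipeline), the snippet below simulates aberration by convolving an image with a normalized PSF; the Gaussian kernel here is a hypothetical stand-in, since real lens PSFs are spatially varying and anisotropic, and noise is ignored:

```python
import numpy as np

def gaussian_psf(size=7, sigma=1.5):
    # Hypothetical isotropic stand-in for a lens PSF, normalized to unit energy.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def degrade(image, psf):
    # Optical degradation as convolution with the PSF
    # (single shift-invariant kernel; no noise term).
    k = psf.shape[0]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * psf)
    return out

# A point source maps to the PSF itself, which is what "point spread" means.
sharp = np.zeros((21, 21))
sharp[10, 10] = 1.0
blurred = degrade(sharp, gaussian_psf())
```

A blind corrector inverts this mapping without knowing the PSF; the LPR idea, per the abstract, is to supply a learned latent encoding of such PSFs as guidance instead.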