🤖 AI Summary
This work addresses inverse problems in single-noisy-observation settings. We propose KAN-PnP, the first framework integrating Kolmogorov–Arnold Networks (KANs) into Plug-and-Play (PnP) optimization as a single-image denoising prior—requiring no large-scale training data. Theoretically, we prove that the KAN-based denoiser is Lipschitz continuous; under convex data-fidelity terms and bounded regularizers, this guarantees rigorous convergence of the PnP-ADMM algorithm. Empirically, KAN-PnP achieves state-of-the-art performance on single-image super-resolution and joint inversion tasks, outperforming existing single-image priors in both reconstruction accuracy and iteration efficiency. These results validate its strong convergence properties and robust denoising capability from a single noisy observation.
📝 Abstract
The use of Plug-and-Play (PnP) methods has become a central approach for solving inverse problems, with denoisers serving as regularising priors that guide optimisation towards a clean solution. In this work, we introduce KAN-PnP, an optimisation framework that incorporates Kolmogorov-Arnold Networks (KANs) as denoisers within the PnP paradigm. KAN-PnP is specifically designed to solve inverse problems with single-instance priors, where only a single noisy observation is available, eliminating the need for the large datasets typically required by traditional denoising methods. We show that KANs, grounded in the Kolmogorov-Arnold representation theorem, serve effectively as priors in such settings, providing a robust approach to denoising. We prove that the KAN denoiser is Lipschitz continuous, ensuring stability and convergence in optimisation algorithms such as PnP-ADMM, even in the single-shot learning setting. Additionally, we provide theoretical guarantees for KAN-PnP, demonstrating its convergence under key conditions: convexity of the data-fidelity term, Lipschitz continuity of the denoiser, and boundedness of the regularisation functional. These conditions are crucial for stable and reliable optimisation. Our experimental results on super-resolution and joint optimisation show that KAN-PnP outperforms existing methods, delivering superior performance in single-shot learning with minimal data. The method exhibits strong convergence properties, achieving high accuracy with fewer iterations.
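To make the PnP-ADMM scheme described above concrete, here is a minimal sketch of the generic iteration: a data-fidelity proximal step, a plug-in denoising step (where the paper plugs in the KAN denoiser), and a dual update. The function names (`pnp_admm`, `denoise`) and the soft-thresholding stand-in for the learned denoiser are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, iters=50):
    """Generic PnP-ADMM sketch for min_x 0.5||Ax - y||^2 + prior,
    where the prior enters implicitly through the `denoise` operator
    (a KAN denoiser in the paper; any denoiser works here)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # The x-update is the prox of the quadratic data-fidelity term:
    # solve (A^T A + rho I) x = A^T y + rho (z - u).
    M = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(M, Aty + rho * (z - u))  # data-fidelity prox
        z = denoise(x + u)                           # plug-in denoiser step
        u = u + x - z                                # dual ascent
    return x

# Toy usage: identity forward operator, soft-thresholding as a
# placeholder "denoiser" standing in for the learned KAN prior.
rng = np.random.default_rng(0)
A = np.eye(8)
x_true = np.ones(8)
y = A @ x_true + 0.1 * rng.standard_normal(8)
x_hat = pnp_admm(y, A, denoise=lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.05, 0.0))
```

The Lipschitz continuity of the denoiser stressed in the abstract is exactly what keeps the `z`-update from expanding the iterates, which is the ingredient behind the stated convergence guarantees.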