Interpretable Operator Learning for Inverse Problems via Adaptive Spectral Filtering: Convergence and Discretization Invariance

📅 2026-03-20
🤖 AI Summary
This work addresses the noise-induced instability of reconstruction in ill-posed inverse problems, the reliance of conventional regularization methods on manual hyperparameter tuning, and the lack of interpretability and cross-resolution generalization in deep learning approaches. To this end, the authors propose the Spectral Correction Network (SC-Net), which learns a signal-to-noise-ratio-adaptive, pointwise filtering function in the spectral domain of the forward operator to reweight spectral coefficients, yielding stable and interpretable solutions. The method integrates interpretable adaptive spectral filtering with operator learning, and the authors prove that it approximates continuous inverse operators, is discretization invariant, and achieves minimax optimal convergence rates. Experiments on 1D integral equations demonstrate a convergence rate of $O(\delta^{0.5})$, matching the theoretical optimum; the learned filter outperforms Oracle Tikhonov regularization and generalizes zero-shot from $N=256$ to $N=2048$ with reconstruction error held at approximately 0.23.

📝 Abstract
Solving ill-posed inverse problems requires effective regularization to stabilize the inversion process against measurement noise. Classical methods such as Tikhonov regularization rely on heuristic parameter tuning, while standard deep learning approaches often lack interpretability and generalization across resolutions. We therefore propose SC-Net (Spectral Correction Network), a novel operator learning framework. SC-Net operates in the spectral domain of the forward operator, learning a pointwise adaptive filter function that reweights spectral coefficients according to the signal-to-noise ratio. We provide a theoretical analysis showing that SC-Net approximates the continuous inverse operator, guaranteeing discretization invariance. Numerical experiments on 1D integral equations demonstrate that SC-Net: (1) achieves the minimax optimal convergence rate ($O(\delta^{0.5})$ for $s=p=1.5$), matching theoretical lower bounds; (2) learns interpretable sharp-cutoff filters that outperform Oracle Tikhonov regularization; and (3) exhibits zero-shot super-resolution, maintaining stable reconstruction errors ($\approx 0.23$) when trained on coarse grids ($N=256$) and tested on significantly finer grids (up to $N=2048$). The proposed method bridges the gap between rigorous regularization theory and data-driven operator learning.
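As a rough illustration of the pointwise spectral filtering the abstract describes, the sketch below applies a classical Tikhonov filter $q(\sigma) = \sigma^2/(\sigma^2 + \alpha)$ in the SVD basis of a discretized 1D integral operator. This is not the SC-Net implementation (which *learns* the filter); the kernel, helper names, and parameter values here are invented for the example:

```python
import numpy as np

def build_kernel(n):
    """Discretize a smoothing integral kernel k(x, y) = exp(-|x - y|) on [0, 1].

    Hypothetical choice of forward operator; the factor 1/n is the
    quadrature weight of the uniform grid.
    """
    x = np.linspace(0, 1, n)
    return np.exp(-np.abs(x[:, None] - x[None, :])) / n

def spectral_reconstruct(K, g, filt):
    """Reconstruct f from noisy data g = Kf + noise via a pointwise spectral filter.

    filt maps each singular value sigma to a weight q(sigma) in [0, 1];
    the estimate is f = sum_i q(sigma_i)/sigma_i * <u_i, g> v_i, so small
    (noise-dominated) spectral coefficients are damped instead of amplified.
    """
    U, s, Vt = np.linalg.svd(K)
    coeffs = filt(s) / s * (U.T @ g)   # reweighted spectral coefficients
    return Vt.T @ coeffs

# Classical Tikhonov filter; SC-Net would replace this with a learned,
# SNR-adaptive function of sigma.
def tikhonov(s, alpha=1e-4):
    return s**2 / (s**2 + alpha)

n = 256
K = build_kernel(n)
f_true = np.sin(2 * np.pi * np.linspace(0, 1, n))
g = K @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

f_rec = spectral_reconstruct(K, g, tikhonov)
rel_err = np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)
print(f"relative error: {rel_err:.3f}")
```

Because the filter acts pointwise on singular values rather than on a fixed grid, the same function can be evaluated on the spectrum of a finer discretization, which is the mechanism behind the cross-resolution behavior claimed above.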
Problem

Research questions and friction points this paper is trying to address.

inverse problems
regularization
discretization invariance
interpretability
ill-posedness
Innovation

Methods, ideas, or system contributions that make the work stand out.

operator learning
spectral filtering
discretization invariance
inverse problems
interpretable deep learning
Hang-Cheng Dong
PhD Student, Harbin Institute of Technology
Pengcheng Cheng
School of Mathematics, Jilin University, Changchun, 130012, China
Shuhuan Li
School of Mathematics, Inner Mongolia University, Hohhot, 010021, China