🤖 AI Summary
Operator learning for inverse problems suffers from poor generalization across discretizations and high retraining costs. Method: This paper proposes C2BNet, a differentiable operator learning framework based on a coefficient-to-basis mapping architecture. It enables zero-shot adaptation to multi-scale discretizations without retraining and introduces a fine-grained tuning mechanism that achieves cross-grid and cross-resolution transfer by updating only a small fraction of parameters. Contributions/Results: Theoretically, the authors prove that C2BNet automatically captures the low-dimensional manifold structure of the solution space and derive rigorous upper bounds on both the approximation error and the generalization error. Empirically, on multiple inverse problems from scientific computing, C2BNet matches the accuracy of full training while updating fewer than 5% of its parameters, cutting computational cost by one to two orders of magnitude. Crucially, the theoretical bounds closely align with the observed performance.
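To illustrate why a coefficient-to-basis decomposition permits zero-shot transfer across discretizations, here is a minimal NumPy sketch. It is an assumption-laden toy, not the paper's architecture: a fixed random linear map stands in for the trained network that produces coefficients, and a Fourier-type basis stands in for the learned basis. The key property shown is that the predicted coefficients are grid-independent, so the same model output can be decoded on a coarse or a fine grid simply by re-evaluating the basis functions.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # number of basis functions (illustrative choice)

def basis(x):
    # Evaluate K sine basis functions at arbitrary points x in [0, 1].
    # In C2BNet the basis would be learned; sines are a stand-in here.
    return np.stack([np.sin((k + 1) * np.pi * x) for k in range(K)], axis=0)

# Stand-in for trained network weights mapping 16 measurements -> K coefficients.
W = rng.normal(size=(K, 16))

def predict(measurements, x):
    coeffs = W @ measurements   # coefficient head: grid-independent output
    return coeffs @ basis(x)    # decode on whatever grid is requested

m = rng.normal(size=16)                        # mock measurement vector
coarse = predict(m, np.linspace(0, 1, 32))     # 32-point discretization
fine = predict(m, np.linspace(0, 1, 256))      # 256-point discretization

# Same coefficients serve both grids; only the basis is re-evaluated,
# which is the mechanism behind discretization-invariant adaptation.
print(coarse.shape, fine.shape)  # → (32,) (256,)
```

Fine-tuning in this picture would update only the small coefficient head `W` (or a low-dimensional correction to it) while keeping the basis fixed, which is consistent with the "small fraction of parameters" claim above.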
📝 Abstract
We propose the Coefficient-to-Basis Network (C2BNet), a novel framework for solving inverse problems within the operator learning paradigm. C2BNet efficiently adapts to different discretizations through fine-tuning, using a pre-trained model to significantly reduce computational cost while maintaining high accuracy. Unlike traditional approaches that require retraining from scratch for each new discretization, our method enables seamless adaptation without sacrificing predictive performance. Furthermore, we establish theoretical approximation and generalization error bounds for C2BNet by exploiting low-dimensional structures in the underlying datasets. Our analysis demonstrates that C2BNet adapts to low-dimensional structures without relying on explicit encoding mechanisms, highlighting its robustness and efficiency. To validate our theoretical findings, we conduct extensive numerical experiments that showcase the superior performance of C2BNet on several inverse problems. The results confirm that C2BNet effectively balances computational efficiency and accuracy, making it a promising tool for solving inverse problems in scientific computing and engineering applications.