🤖 AI Summary
Scientific machine learning faces a fundamental verification challenge: statistical uncertainty estimates rest on strong assumptions, while classical interpolation offers rigorous error bounds but is computationally intractable at scale. This work introduces the first verifiable modeling framework to pair adaptive interpolation with tight, computationally feasible upper bounds on pointwise error, making each prediction mathematically falsifiable. The approach couples radial basis function (RBF) interpolation, interval analysis, Lipschitz constant estimation, and uncertainty propagation into an error-aware training paradigm with a constraint-aware loss function. On partial differential equation (PDE) surrogate modeling tasks, the framework achieves a 99.2% error-bound coverage rate while reducing verification latency by three orders of magnitude, strengthening trustworthiness and enabling reliable closed-loop decision-making in scientific simulation workflows.
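To make the central mechanism concrete, here is a minimal sketch (not the paper's code) of how a Lipschitz constant turns an RBF surrogate's predictions into ones with a pointwise error bound and a falsifiability check. The helpers `lipschitz_from_data` and `certified_bound`, the pairwise-slope estimator, and the test function are all illustrative assumptions; the paper's actual pipeline additionally involves interval analysis and an error-aware loss, which this sketch does not attempt to reproduce.

```python
# Sketch: RBF surrogate + Lipschitz-envelope error bound (illustrative only).
import numpy as np
from scipy.interpolate import RBFInterpolator

def lipschitz_from_data(x, y):
    """Largest observed slope between sample pairs.

    This is only a lower bound on the true Lipschitz constant; a real
    verification pipeline would inflate it or bound it analytically.
    """
    diffs = x[:, None, :] - x[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)          # ignore zero self-distances
    slopes = np.abs(y[:, None] - y[None, :]) / dists
    return slopes.max()

def certified_bound(x_query, x_train, y_train, y_pred, L):
    """Pointwise bound on |f(x) - s(x)| from the Lipschitz envelope.

    If f is L-Lipschitz and matches the training data, then for every x
        f(x) in [max_i (y_i - L*||x - x_i||), min_i (y_i + L*||x - x_i||)],
    so the surrogate's error is at most its distance to the far edge of
    that interval, and a prediction outside it is provably inconsistent.
    """
    d = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=-1)
    lo = (y_train[None, :] - L * d).max(axis=1)   # envelope floor
    hi = (y_train[None, :] + L * d).min(axis=1)   # envelope ceiling
    bound = np.maximum(np.abs(y_pred - lo), np.abs(y_pred - hi))
    falsified = (y_pred < lo) | (y_pred > hi)     # falsifiability check
    return bound, falsified

# Toy 2-D problem standing in for a PDE surrogate's input-output map.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(64, 2))
y_train = np.sin(3 * x_train[:, 0]) * np.cos(2 * x_train[:, 1])

surrogate = RBFInterpolator(x_train, y_train, kernel="thin_plate_spline")
x_query = rng.uniform(-1, 1, size=(5, 2))
y_pred = surrogate(x_query)

L = lipschitz_from_data(x_train, y_train)
bound, falsified = certified_bound(x_query, x_train, y_train, y_pred, L)
for p, b, bad in zip(y_pred, bound, falsified):
    print(f"prediction {p:+.3f}  certified error <= {b:.3f}  falsified={bad}")
```

Note that with a data-driven `L` the bound is only heuristic, since the estimate can undershoot the true constant; a certified pipeline needs a validated upper bound on L, which is presumably where the framework's interval-analysis component comes in.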