🤖 AI Summary
This work addresses the learning of evolution operators for time-dependent Schrödinger equations. Departing from black-box neural network methods (e.g., FNO, DeepONet) that neglect fundamental quantum mechanical principles, we propose the first theory-driven operator learning framework jointly enforcing linearity and weak unitarity constraints. Our approach: (1) constructs an explicitly parameterized estimator satisfying both constraints; (2) derives rigorous prediction error bounds that hold uniformly over all sufficiently smooth initial states; and (3) establishes the first quantifiable generalization bound for temporal extrapolation. Evaluated on realistic Hamiltonians—including hydrogen atom, ion-trap, and optical lattice systems—our method achieves relative errors two to three orders of magnitude lower than state-of-the-art alternatives, markedly improving physical consistency and out-of-distribution generalization.
📝 Abstract
We consider the problem of learning the evolution operator for the time-dependent Schrödinger equation, where the Hamiltonian may vary with time. Existing neural network-based surrogates often ignore fundamental properties of the Schrödinger equation, such as linearity and unitarity, and lack theoretical guarantees on prediction error or time generalization. To address this, we introduce a linear estimator for the evolution operator that preserves a weak form of unitarity. We establish both upper and lower bounds on the prediction error that hold uniformly over all sufficiently smooth initial wave functions. Additionally, we derive time generalization bounds that quantify how the estimator extrapolates beyond the time points seen during training. Experiments across real-world Hamiltonians -- including hydrogen atoms, ion traps for qubit design, and optical lattices -- show that our estimator achieves relative errors $10^{-2}$ to $10^{-3}$ times smaller than state-of-the-art methods such as the Fourier Neural Operator and DeepONet.
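To make the two constraints concrete, here is a minimal sketch (not the paper's actual estimator) of how linearity and unitarity can be enforced by construction: parameterize the evolution operator as $U(t) = e^{-itH_\theta}$ for a learnable Hermitian matrix $H_\theta$, so the map acts linearly on states and preserves the $\ell^2$ norm exactly. The dimension, generator, and `evolve` helper below are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 8  # illustrative dimension of the discretized wave function

# A Hermitian generator stands in for learned parameters H_theta.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H_theta = (A + A.conj().T) / 2  # (A + A^dagger)/2 is Hermitian by construction

def evolve(psi0, t):
    """Apply U(t) = exp(-i t H_theta) linearly to an initial state.

    Since H_theta is Hermitian, U(t) is unitary, so the norm of psi0
    (total probability) is preserved for every t.
    """
    return expm(-1j * t * H_theta) @ psi0

psi0 = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi0 /= np.linalg.norm(psi0)  # normalize the initial state
psi_t = evolve(psi0, t=0.7)
print(np.linalg.norm(psi_t))  # stays 1 up to floating-point error
```

A weakly unitary estimator, as in the abstract, would relax this to approximate norm preservation; the hard-constrained version above is just the simplest way to see both properties hold simultaneously.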