🤖 AI Summary
Neural network quantum states (NQS) face prohibitive computational costs and poor generalization across parameter regimes when applied to strongly correlated electron systems. Method: We propose a physics-informed transferable curriculum learning framework that embeds prior physical knowledge by interpreting NQS transfer learning through perturbation theory; we further design Pairing-Net, a novel graph neural network architecture that overcomes key limitations of conventional NQS in generalizability and scalability. Contributions/Results: Our framework achieves an approximately 200× computational speedup while significantly enhancing numerical stability. It enables efficient and robust exploration of many-body phase space on large lattices (≥32 sites) and across broad parameter ranges (U/t ∈ [0, 12]). This work establishes a new paradigm for automated phase diagram discovery in strongly correlated systems.
📝 Abstract
Recent advances in neural network quantum states (NQS) have enabled high-accuracy predictions for complex quantum many-body systems such as strongly correlated electron systems. However, the computational cost remains prohibitive, making it inefficient to explore interaction strengths and other physical parameters across their diverse regimes. While transfer learning has been proposed to mitigate this challenge, generalizing to large-scale systems and diverse parameter regimes remains difficult. To address this limitation, we propose a novel curriculum learning framework based on transfer learning for NQS, which facilitates efficient and stable exploration across the vast parameter space of quantum many-body systems. In addition, by interpreting NQS transfer learning through a perturbative lens, we demonstrate how prior physical knowledge can be flexibly incorporated into the curriculum learning process. We also propose Pairing-Net, an architecture that practically implements this strategy for strongly correlated electron systems, and we empirically verify its effectiveness. Our results show an approximately 200-fold computational speedup and a marked improvement in optimization stability compared to conventional methods.
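The transfer-based curriculum idea can be illustrated with a minimal toy sketch. This is not the paper's method or architecture: it assumes a hypothetical one-parameter variational ansatz whose energy minimum drifts smoothly with the coupling U/t (consistent with the perturbative picture that nearby couplings have nearby ground states), and all function names here are illustrative. Each stage of the U/t schedule warm-starts optimization from the parameters converged at the previous, weaker coupling.

```python
# Toy sketch of curriculum learning with transfer for a variational ansatz.
# Hypothetical example: the "energy" is a simple surrogate whose minimum
# moves smoothly with U, standing in for the smooth dependence of the
# ground state on the coupling assumed by a perturbative interpretation.

def toy_energy(theta, U):
    """Surrogate variational energy; its minimum sits at theta = 0.1 * U."""
    return (theta - 0.1 * U) ** 2

def optimize(theta, U, lr=0.1, max_steps=50, tol=1e-4):
    """Plain gradient descent on the surrogate energy at fixed coupling U."""
    for step in range(max_steps):
        grad = 2.0 * (theta - 0.1 * U)   # analytic gradient of toy_energy
        if abs(grad) < tol:
            return theta, step
        theta -= lr * grad
    return theta, max_steps

def curriculum(U_schedule, theta0=0.0):
    """Sweep U from weak to strong coupling; each stage warm-starts from
    the parameters optimized at the previous stage (transfer learning)."""
    theta = theta0
    for U in U_schedule:
        theta, _ = optimize(theta, U)
    return theta

# Curriculum over U/t in [0, 12]: every stage starts near its optimum.
final_theta = curriculum([0, 2, 4, 6, 8, 10, 12])
print(round(final_theta, 2))  # prints 1.2, the minimum at U/t = 12
```

In a real NQS workflow the scalar `theta` would be the full set of network parameters and `optimize` a variational Monte Carlo loop; the sketch only shows the scheduling mechanism, where warm starts keep each optimization in a well-conditioned neighborhood rather than restarting from scratch at strong coupling.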