🤖 AI Summary
Neural quantum states (NQS) suffer from high computational cost and training instability when simulating ground states of one- and two-dimensional quantum many-body systems. To address this, we propose an adaptive NQS optimization framework based on recurrent neural networks (RNNs). Our core innovation is the first integration of an adaptive strategy into variational NQS optimization: an RNN quantum state trained on a small system provides the parameter initialization for efficiently training larger networks. This approach significantly reduces training fluctuations, enhances parameter reuse, and improves computational-resource efficiency. Coupled with variational Monte Carlo and GPU acceleration, our method achieves higher-accuracy ground-state energy estimates on prototypical models, including the Heisenberg and Hubbard models, while using fewer computational resources. The framework thus advances the accuracy, training stability, and scalability of NQS-based quantum many-body simulations.
📝 Abstract
Neural-network quantum states (NQS) are powerful neural-network ansätze that have emerged as promising tools for studying quantum many-body physics through the lens of the variational principle. These architectures are known to be systematically improvable by increasing the number of parameters. Here we demonstrate an Adaptive scheme for optimizing NQS, using recurrent neural networks (RNNs) as an example, at a fraction of the computational cost while reducing training fluctuations and improving the quality of variational calculations targeting ground states of prototypical models in one and two spatial dimensions. This Adaptive technique reduces the computational cost by training small RNNs and reusing them to initialize larger RNNs. This work opens up the possibility of optimizing the graphics processing unit (GPU) resources deployed in large-scale NQS simulations.
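The core idea of reusing a small trained RNN to initialize a larger one can be illustrated with a minimal sketch. The snippet below embeds a trained small recurrent weight matrix into the top-left block of a larger one and pads the rest with small random values, so optimization of the larger network starts near the small-network optimum. The function name `grow_rnn_weights` and the block-embedding scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def grow_rnn_weights(W_small, b_small, d_large, init_scale=1e-2, seed=0):
    """Initialize a larger RNN's recurrent parameters from a trained small RNN.

    Hypothetical sketch of an adaptive warm start: the trained
    d_small x d_small recurrent weight matrix W_small is copied into the
    top-left block of the new d_large x d_large matrix, and the remaining
    entries are drawn from a small-scale random distribution so the larger
    network initially behaves much like the smaller, already-trained one.
    """
    rng = np.random.default_rng(seed)
    d_small = W_small.shape[0]
    # Small random entries everywhere, then overwrite the trained block.
    W_large = init_scale * rng.standard_normal((d_large, d_large))
    W_large[:d_small, :d_small] = W_small
    # Biases: reuse the trained entries, zero-pad the new hidden units.
    b_large = np.zeros(d_large)
    b_large[:d_small] = b_small
    return W_large, b_large

# Example: grow a 2-unit hidden state to 4 units.
W_small = np.array([[0.5, -0.1],
                    [0.2,  0.7]])
b_small = np.array([0.1, -0.3])
W_large, b_large = grow_rnn_weights(W_small, b_small, d_large=4)
```

In practice the same padding would be applied to each gate of the RNN cell (and to the input and output projections); the point of the scheme is that the expensive variational Monte Carlo training of the large network starts from a physically meaningful state rather than from scratch.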