🤖 AI Summary
Classical simulation of quantum computers is essential for developing and testing quantum algorithms while real quantum hardware remains limited, yet it is extremely demanding in both computation and memory and therefore scales poorly on current classical systems. This review surveys the components of a quantum computer, the levels of abstraction at which those components and full systems can be simulated, and state-of-the-art acceleration approaches applied at each level. Beyond algorithmic optimizations, it highlights the most promising hardware-aware optimizations and future directions for improving the performance and scalability of quantum-computer simulation.
📝 Abstract
Quantum computing has the potential to revolutionize multiple fields by solving complex problems that cannot be solved in reasonable time on current classical computers. Nevertheless, the development of quantum computers is still in its early stages, and the available systems still have very limited resources. As such, the most practical way to develop and test quantum algorithms today is to use classical simulators of quantum computers. In addition, the development of new quantum computers and their components also depends on simulation. Given the characteristics of quantum computers, their simulation is a very demanding application in terms of both computation and memory, so simulations do not scale well on current classical systems. Thus, different optimization and approximation techniques need to be applied at different levels. This review provides an overview of the components of a quantum computer, the levels at which these components and the whole quantum computer can be simulated, and an in-depth analysis of different state-of-the-art acceleration approaches. Besides the optimizations that can be performed at the algorithmic level, this review presents the most promising hardware-aware optimizations and future directions that can be explored for improving the performance and scalability of the simulations.
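To make the abstract's scaling claim concrete, the sketch below (an illustrative example not taken from the review; function names are hypothetical) shows why full state-vector simulation is so demanding: an *n*-qubit state requires 2^n complex amplitudes, and every single-qubit gate touches all of them, giving O(2^n) memory and O(2^n) work per gate.

```python
import numpy as np

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes needed to store a full n-qubit state vector (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 30 qubits already need 16 GiB; 100 qubits would need ~2e31 bytes,
# far beyond any classical machine -- hence the need for approximation.
print(state_vector_bytes(30) / 2**30)  # 16.0 (GiB)

def apply_single_qubit_gate(state: np.ndarray, gate: np.ndarray,
                            target: int) -> np.ndarray:
    """Apply a 2x2 gate to the `target` qubit of a 2^n state vector."""
    n = state.size.bit_length() - 1
    # View the vector as an n-axis tensor, one axis per qubit,
    # contract the gate against the target axis, and restore the order.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

# Example: Hadamard on qubit 0 of |00> gives (|00> + |10>) / sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(4, dtype=complex)
state[0] = 1.0
out = apply_single_qubit_gate(state, H, 0)
```

Techniques surveyed in the review, such as tensor-network methods or sparse state representations, exist precisely to avoid materializing this full 2^n vector.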