🤖 AI Summary
This work systematically compares genetic algorithms (GA), graph neural networks (GNNs), and the density matrix renormalization group (DMRG), a tensor-network method, on the weighted Max-Cut problem across graph instances with 10–250 nodes. Under a unified benchmark, we evaluate all three paradigms side by side on solution quality, runtime, and memory consumption. Results show that the GA yields near-optimal solutions on small graphs, but its runtime grows steeply with problem size; GNNs run quickly with low memory usage on medium-sized instances, yet generalize inconsistently to larger graphs; and DMRG consistently attains high approximation ratios (>0.98) on large-scale instances with efficient execution, albeit at the cost of higher memory consumption. The key contribution is an empirical demonstration of tensor-network methods as a competitive paradigm for combinatorial optimization, revealing their scalability, robustness, and resource efficiency, and providing concrete evidence for the practical viability of quantum-inspired algorithms.
📝 Abstract
Combinatorial optimization is essential across numerous disciplines. Traditional metaheuristics excel at exploring complex solution spaces efficiently, yet they often struggle with scalability. Deep learning has emerged as a viable alternative for quickly generating high-quality solutions, particularly where metaheuristics underperform. In recent years, quantum-inspired approaches such as tensor networks have shown promise in addressing these challenges. Despite these advances, a thorough comparison of the different paradigms is missing. This study evaluates three algorithms on weighted Max-Cut graphs ranging from 10 to 250 nodes: a Genetic Algorithm representing metaheuristics, a Graph Neural Network representing deep learning, and the Density Matrix Renormalization Group as a tensor-network approach. Our analysis focuses on solution quality and computational efficiency (i.e., time and memory usage). Numerical results show that the Genetic Algorithm achieves near-optimal results for small graphs, although its computation time grows significantly with problem size. The Graph Neural Network offers a balanced option for medium-sized instances, with low memory demands and rapid inference, yet it exhibits greater variability on larger graphs. Meanwhile, the tensor-network approach consistently yields high approximation ratios and efficient execution on larger graphs, albeit with increased memory consumption.
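For readers unfamiliar with the objective being benchmarked, the sketch below shows how the weighted Max-Cut value and the approximation ratio are computed. The toy graph, partition, and variable names are purely illustrative assumptions, not instances or code from the paper; the brute-force optimum is only feasible here because the graph is tiny, which is exactly why heuristics like GA, GNNs, and DMRG are needed at 10–250 nodes.

```python
from itertools import product

# Toy weighted graph: edges as (u, v, weight). Values are illustrative,
# not taken from the paper's benchmark instances.
edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 3.0), (1, 3, 1.5), (2, 3, 2.5)]
n_nodes = 4

def cut_value(assignment, edges):
    """Total weight of edges crossing the partition (the Max-Cut objective)."""
    return sum(w for u, v, w in edges if assignment[u] != assignment[v])

# Brute-force optimum over all 2^n partitions -- only viable for tiny graphs.
best = max(cut_value(a, edges) for a in product((0, 1), repeat=n_nodes))

# Approximation ratio of a candidate partition (e.g. one returned by a heuristic):
# achieved cut weight divided by the optimal cut weight.
candidate = (0, 1, 1, 0)
ratio = cut_value(candidate, edges) / best
print(best, round(ratio, 4))
```

On this toy instance the optimum is 7.5 (nodes {0, 2} vs. {1, 3}) and the candidate partition reaches 7.0, giving a ratio of about 0.933; the paper's >0.98 figures for DMRG are ratios of this kind on much larger graphs.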