🤖 AI Summary
For high-dimensional non-convex optimization in complex, noisy, and dynamic environments, this paper proposes Graph Neural Evolution (GNE), the first framework to establish a theoretical duality between Graph Neural Networks (GNNs) and Evolutionary Algorithms (EAs). Methodologically, individuals in the population are modeled as graph nodes, and a spectral-domain filter jointly aggregates high-frequency signals (to preserve diversity) and low-frequency signals (to enhance stability), yielding an interpretable, tunable balance between global exploration and local exploitation. Key contributions include: (1) the first EA to integrate graph-structured representations with spectral filtering; (2) bidirectional modeling insights bridging GNNs and EAs; and (3) state-of-the-art performance, surpassing GA, DE, and CMA-ES in robustness, convergence speed, and solution quality across standard benchmarks, hyperparameter optimization, and neural architecture search tasks.
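To make the core mechanism concrete, the update described above can be sketched as spectral filtering over a similarity graph built from the population. The sketch below is an illustrative assumption, not the paper's actual implementation: the Gaussian similarity kernel, the symmetric normalized Laplacian, and the mixing weights `alpha`/`beta` are all hypothetical choices standing in for GNE's designed filters.

```python
import numpy as np

def gne_spectral_step(X, alpha=0.8, beta=0.2, sigma=1.0):
    """Hedged sketch of a GNE-style update on a population matrix X
    (n_individuals x n_dims). alpha/beta mix low- and high-frequency
    components; these parameters are illustrative, not from the paper."""
    n = X.shape[0]
    # Gaussian similarity graph over individuals (one common choice).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt
    # Low-pass filter (I - L) averages each individual with its graph
    # neighbors (stability); high-pass filter (L) amplifies differences
    # from neighbors (diversity).
    low = (np.eye(n) - L) @ X
    high = L @ X
    return alpha * low + beta * high

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X_next = gne_spectral_step(X)
print(X_next.shape)  # (20, 5)
```

Tuning `alpha` toward 1 emphasizes exploitation (neighbor consensus), while raising `beta` emphasizes exploration (divergence from neighbors), which is the exploration/exploitation dial the summary attributes to the frequency domain.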
📝 Abstract
In this paper, we reveal the intrinsic duality between graph neural networks (GNNs) and evolutionary algorithms (EAs), bridging two traditionally distinct fields. Building on this insight, we propose Graph Neural Evolution (GNE), a novel evolutionary algorithm that models individuals as nodes in a graph and applies specially designed frequency-domain filters to balance global exploration and local exploitation. Through these filters, GNE aggregates high-frequency (diversity-enhancing) and low-frequency (stability-promoting) information, transforming EAs into interpretable and tunable mechanisms in the frequency domain. Extensive experiments on benchmark functions demonstrate that GNE consistently outperforms state-of-the-art algorithms such as GA, DE, CMA-ES, SDAES, and RL-SHADE, excelling on complex landscapes, under shifting optima, and in noisy environments. Its robustness, adaptability, and superior convergence highlight its practical and theoretical value. Beyond optimization, GNE establishes a conceptual and mathematical foundation linking EAs and GNNs, offering new perspectives for both fields. Its framework encourages the development of task-adaptive filters and hybrid approaches for EAs, while its insights can inspire advances in GNNs, such as improved global information propagation and mitigation of oversmoothing. GNE's versatility extends to challenges in machine learning, including hyperparameter tuning and neural architecture search, as well as real-world applications in engineering and operations research. By uniting the dynamics of EAs with the structural insights of GNNs, this work provides a foundation for interdisciplinary innovation, paving the way for scalable and interpretable solutions to complex optimization problems.