AI Summary
Traditional evolutionary algorithms (EAs) struggle to leverage historical knowledge effectively or to adapt to dynamically expanding knowledge bases, limiting their generalization and continual-optimization capability. To address this, we propose the Optimization Knowledge Adaptation Evolutionary Model (OKAEM), an EA framework that incorporates attention mechanisms into evolutionary operator modeling, combining pre-learned prior knowledge with real-time self-tuning to parameterize the selection, crossover, and mutation operators online. OKAEM supports continual, self-adaptive evolution as its knowledge base grows and emulates the biological principles of natural selection and genetic recombination. Experiments demonstrate that OKAEM significantly outperforms baselines across diverse knowledge-transfer scenarios, achieves competitive performance through self-tuning alone even without prior knowledge, surpasses state-of-the-art black-box baselines in a vision-language model tuning case, and improves its optimization capability as knowledge accumulates.
Abstract
Evolutionary algorithms (EAs) maintain populations through evolutionary operators to discover diverse solutions for complex tasks while gathering valuable knowledge, such as historical population data and fitness evaluations. However, traditional EAs face challenges in dynamically adapting to expanding knowledge bases, hindering the efficient exploitation of accumulated information and limiting adaptability to new situations. To address these issues, we introduce an Optimization Knowledge Adaptation Evolutionary Model (OKAEM), which features dynamic parameter adjustment using accumulated knowledge to enhance its optimization capabilities. OKAEM employs attention mechanisms to model the interactions among individuals, fitness landscapes, and genetic components separately, thereby parameterizing the evolutionary operators of selection, crossover, and mutation. These powerful learnable operators enable OKAEM to benefit from pre-learned extensive prior knowledge and self-tune with real-time evolutionary insights. Experimental results demonstrate that OKAEM: 1) exploits prior knowledge for significant performance gains across various knowledge transfer settings; 2) achieves competitive performance through self-tuning alone, even without prior knowledge; 3) outperforms state-of-the-art black-box baselines in a vision-language model tuning case; 4) can improve its optimization capabilities with growing knowledge; 5) is capable of emulating principles of natural selection and genetic recombination.
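The core idea of an attention-parameterized selection operator can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: the function name `attention_selection` and the projection matrices `Wq`/`Wk` are hypothetical, and in OKAEM such weights would be pre-learned from accumulated knowledge and self-tuned with real-time evolutionary insights rather than drawn at random.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_selection(pop, fitness, Wq, Wk, rng, temp=1.0):
    """Score individuals via attention over (genotype, fitness) features,
    then sample parents from the resulting selection distribution."""
    feats = np.hstack([pop, fitness[:, None]])   # condition on the fitness signal
    q, k = feats @ Wq, feats @ Wk                # learnable query/key projections
    scores = (q @ k.T) / np.sqrt(q.shape[-1])    # pairwise individual interactions
    sel_logits = scores.mean(axis=1) / temp      # aggregate to per-individual logits
    probs = softmax(sel_logits)                  # selection probabilities
    idx = rng.choice(len(pop), size=len(pop), p=probs)
    return pop[idx], probs

rng = np.random.default_rng(0)
pop = rng.normal(size=(8, 4))                    # 8 individuals, 4 genes each
fitness = -np.sum(pop**2, axis=1)                # toy sphere objective (maximize)
d = pop.shape[1] + 1
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
parents, probs = attention_selection(pop, fitness, Wq, Wk, rng)
# parents has the same shape as pop; probs sums to 1
```

In a full learnable-operator setup, analogous attention blocks over genetic components would parameterize crossover and mutation, and gradients from a meta-objective would update the projection matrices as knowledge accumulates.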