🤖 AI Summary
Existing multi-objective evolutionary algorithms (MOEAs) generalize poorly and transfer weakly across problems when tackling large-scale, many-objective, and computationally expensive multi-objective optimization problems (MOPs). Method: We propose the pre-evolved model (PEM), which introduces a pre-evolving paradigm for MOEAs: a Transformer-based model is pre-evolved on a substantial corpus of existing MOPs to learn transferable population-evolution priors. PEM devises dimension embedding and objective encoding to jointly represent the decision and objective spaces, enabling lightweight fine-evolving and online model updates on unseen problems. Contribution/Results: Across diverse challenging benchmarks, PEM outperforms state-of-the-art MOEAs in convergence, diversity, and computational efficiency, improving cross-problem generalization and practical deployability for real-world complex MOPs.
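To make the joint representation concrete, here is a minimal sketch of how dimension embedding and objective encoding might map a population into fixed-size token sequences. All names, shapes, and the random projections are illustrative assumptions, not the paper's actual architecture; the point is only that populations with differing numbers of decision variables and objectives can share one token space.

```python
import numpy as np

def dimension_embed(pop, objs, d_model=16, seed=0):
    """Hypothetical sketch: pop is (n, n_vars) decision vectors,
    objs is (n, m) objective values. Returns (n, n_vars + m, d_model)
    token sequences suitable as Transformer input."""
    rng = np.random.default_rng(seed)
    n, n_vars = pop.shape
    m = objs.shape[1]
    # shared scalar-to-vector projection (stand-in for a learned weight)
    w_val = rng.normal(size=d_model)
    # per-dimension and per-objective index codes (stand-in for learned embeddings)
    pos_var = rng.normal(size=(n_vars, d_model))
    pos_obj = rng.normal(size=(m, d_model))
    var_tokens = pop[:, :, None] * w_val + pos_var   # (n, n_vars, d_model)
    obj_tokens = objs[:, :, None] * w_val + pos_obj  # (n, m, d_model)
    return np.concatenate([var_tokens, obj_tokens], axis=1)

pop = np.random.rand(8, 30)   # 8 solutions, 30 decision variables
objs = np.random.rand(8, 3)   # 3 objectives
tokens = dimension_embed(pop, objs)
print(tokens.shape)           # (8, 33, 16)
```

The index codes play the role of positional encodings, letting the model distinguish which decision variable or objective each token came from.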
📝 Abstract
Multi-objective optimization problems (MOPs) require the simultaneous optimization of multiple, often conflicting, objectives. Numerous studies have demonstrated that evolutionary computation is a promising paradigm for solving complex MOPs, i.e., problems with large-scale decision variables, many objectives, or expensive evaluation functions. However, existing multi-objective evolutionary algorithms (MOEAs) struggle to generate high-quality populations across diverse complex MOPs: each class of problem imposes distinct requirements and constraints on the population, rendering a given MOEA inefficient, or even ineffective, on others. This paper therefore proposes the concept of pre-evolving for MOEAs to generate high-quality populations for diverse complex MOPs. Drawing inspiration from the classical Transformer architecture, we devise dimension embedding and objective encoding techniques to construct the pre-evolved model (PEM). The PEM is pre-evolved on a substantial number of existing MOPs. When fine-evolving on a new complex MOP, the PEM transforms the current population into the next generation so as to approximate the Pareto-optimal front, and it uses the evaluations of the new solutions to iteratively update itself for subsequent generations, thereby solving various complex MOPs efficiently. Experimental results demonstrate that the PEM outperforms state-of-the-art MOEAs on a range of complex MOPs.
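The fine-evolving loop described above can be sketched as follows. The model here is a stub (random perturbation plus a toy greedy selection), standing in for the pre-evolved Transformer; the bi-objective test function and the update rule are assumptions for illustration only.

```python
import numpy as np

def evaluate(pop):
    # toy bi-objective problem (assumed for illustration only)
    f1 = np.sum(pop ** 2, axis=1)
    f2 = np.sum((pop - 1.0) ** 2, axis=1)
    return np.stack([f1, f2], axis=1)

class StubPEM:
    """Stand-in for the pre-evolved model's two roles in fine-evolving."""
    def __init__(self, step=0.1):
        self.step = step
    def propose(self, pop, objs, rng):
        # stand-in for the model's forward pass producing the next generation
        return pop + rng.normal(scale=self.step, size=pop.shape)
    def update(self, pop, objs):
        # stand-in for online adaptation of the model from new evaluations
        self.step *= 0.95

rng = np.random.default_rng(1)
pop = rng.random((20, 10))        # 20 solutions, 10 decision variables
model = StubPEM()
for gen in range(5):
    offspring = model.propose(pop, evaluate(pop), rng)
    objs = evaluate(offspring)     # evaluate the new solutions
    model.update(offspring, objs)  # use evaluations to update the model
    # toy survival: keep whichever of parent/offspring has a smaller objective sum
    better = objs.sum(axis=1) < evaluate(pop).sum(axis=1)
    pop = np.where(better[:, None], offspring, pop)
print(pop.shape)  # (20, 10)
```

In the actual method the proposal step would be the Transformer mapping the population toward the Pareto-optimal front, and selection would use standard multi-objective criteria rather than an objective sum.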