🤖 AI Summary
This work addresses continuous multi-objective optimization problems (MOPs), aiming to improve how closely a finite Pareto-front approximation set matches a reference set in distribution. We propose using the Maximum Mean Discrepancy (MMD) to quantify the distributional distance between these two sets—marking the first integration of MMD as an objective in multi-objective optimization. We derive closed-form expressions for the gradient and Hessian of MMD with respect to the decision variables, enabling a second-order, set-oriented Newton method (MMDN). Furthermore, we design a hybrid warm-start framework combining MMDN with a multi-objective evolutionary algorithm (MOEA). Evaluated on 11 standard benchmark problems under identical computational budgets, the hybrid approach significantly outperforms pure MOEAs, yielding substantial improvements in both the distribution quality and the convergence accuracy of the Pareto-front approximation. Key contributions include: (i) MMD-based distributional modeling of solution sets; (ii) a differentiable, second-order optimizer tailored for set-oriented optimization; and (iii) a synergistic paradigm integrating evolutionary and deterministic optimization strategies.
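To make the MMD-based distance concrete, the sketch below computes the (biased, V-statistic) squared MMD between two finite point sets viewed as empirical measures, using a Gaussian kernel. The kernel choice and bandwidth `sigma` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared MMD between the empirical measures of X (m x d) and Y (n x d):
    # MMD^2 = mean k(x, x') + mean k(y, y') - 2 mean k(x, y).
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
print(mmd2(A, A.copy()))  # identical sets -> 0 (up to rounding)
print(mmd2(A, A + 3.0))   # shifted sets  -> strictly positive
```

Because MMD is zero exactly when the two empirical measures coincide and grows as they drift apart, it serves as a differentiable set-level objective for driving the approximation set toward the reference set.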
📝 Abstract
Maximum mean discrepancy (MMD) has been widely employed to measure the distance between probability distributions. In this paper, we propose using MMD to solve continuous multi-objective optimization problems (MOPs). A common approach to solving MOPs is to minimize a distance (e.g., the Hausdorff distance) between a finite approximation set of the Pareto front and a reference set. Viewing these two sets as empirical measures, we propose using MMD to measure the distance between them. To minimize the MMD value, we provide analytical expressions for its gradient and Hessian matrix w.r.t. the search variables, and use them to devise a novel set-oriented, MMD-based Newton (MMDN) method. We also analyze the theoretical properties of MMD's gradient and Hessian, including the first-order stationarity condition and the eigenspectrum of the Hessian, which are important for verifying the correctness of MMDN. To solve complicated problems, we propose hybridizing MMDN with multi-objective evolutionary algorithms (MOEAs): we first run the MOEA for several iterations to get close to the global Pareto front and then warm-start MMDN with the MOEA's result to efficiently refine the approximation. We empirically test the hybrid algorithm on 11 widely used benchmark problems, and the results show that the hybrid (MMDN + MOEA) achieves much better optimization accuracy than the MOEA alone under the same computation budget.
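As a simplified illustration of the set-oriented minimization described above, the sketch below moves an approximation set toward a reference set by plain gradient descent on the squared MMD with a Gaussian kernel. This is a deliberately reduced sketch: it updates the points directly rather than the search variables composed with the objective map, and it uses a first-order step instead of the Newton step with the analytical Hessian that MMDN employs; the reference set, bandwidth, and step size are all hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

def mmd2_grad(X, Y, sigma=1.0):
    # Analytical gradient of MMD^2 w.r.t. each point x_i of X (Gaussian kernel):
    # d/dx_i = (2/sigma^2) [ (1/(mn)) sum_b (x_i - y_b) k(x_i, y_b)
    #                      - (1/m^2)  sum_b (x_i - x_b) k(x_i, x_b) ]
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    cross = (Kxy.sum(axis=1)[:, None] * X - Kxy @ Y) / (m * n)
    self_ = (Kxx.sum(axis=1)[:, None] * X - Kxx @ X) / (m * m)
    return (2.0 / sigma**2) * (cross - self_)

rng = np.random.default_rng(1)
ref = rng.uniform(-1, 1, size=(40, 2))             # hypothetical reference set
approx = ref + rng.normal(0, 0.5, size=ref.shape)  # perturbed approximation set
before = mmd2(approx, ref)
for _ in range(200):                               # plain gradient descent
    approx -= 0.2 * mmd2_grad(approx, ref)
after = mmd2(approx, ref)
print(before, after)  # after < before: the set has moved toward the reference
```

The attraction term pulls each point toward the reference set while the self-interaction term keeps the points spread out, which is what gives MMD minimization its distribution-matching (rather than merely convergence-driving) behavior; MMDN accelerates this descent with second-order Newton steps.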