🤖 AI Summary
To address the slow convergence and insufficient diversity of Pareto-optimal solutions in multi-objective optimization (MOO), this paper proposes SPREAD, a generative framework based on Denoising Diffusion Probabilistic Models (DDPMs). SPREAD learns a conditional diffusion process in the decision space; during reverse sampling, it applies an adaptive multiple-gradient-descent update to accelerate convergence and a Gaussian radial basis function (RBF)-based repulsion term to make the solution distribution more uniform. The framework supports both offline optimization and Bayesian surrogate-assisted scenarios, offering scalability and robustness. Empirical evaluation across multiple benchmarks shows that SPREAD consistently outperforms state-of-the-art methods in convergence speed and Pareto front coverage, including on large-scale and expensive black-box problems. To our knowledge, this is the first work to systematically integrate diffusion models into MOO, establishing a novel paradigm for efficient, high-quality Pareto set generation.
📝 Abstract
Developing efficient multi-objective optimization methods to compute the Pareto set of optimal compromises between conflicting objectives remains a key challenge, especially for large-scale and expensive problems. To bridge this gap, we introduce SPREAD, a generative framework based on Denoising Diffusion Probabilistic Models (DDPMs). SPREAD first learns a conditional diffusion process over points sampled from the decision space; then, at each reverse diffusion step, it refines candidates via a sampling scheme that combines an update inspired by adaptive multiple-gradient descent (MGDA) for fast convergence with a Gaussian RBF-based repulsion term for diversity. Empirical results on multi-objective optimization benchmarks, covering both offline and Bayesian surrogate-based settings, show that SPREAD matches or exceeds leading baselines in efficiency, scalability, and Pareto front coverage.
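The refinement step described above can be sketched in a minimal, hedged form. The code below is not the paper's implementation: it uses two toy quadratic objectives, the closed-form two-objective MGDA min-norm direction, and an illustrative Gaussian-RBF pairwise repulsion; all names (`mgda_direction`, `rbf_repulsion`, `refine`) and hyperparameters are assumptions, and the diffusion-model conditioning is omitted.

```python
import numpy as np

# Toy problem: two conflicting quadratics whose Pareto set is the
# segment between anchors a and b (illustrative stand-in only).
rng = np.random.default_rng(0)
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def grads(x):
    """Analytic gradients of f1 = ||x - a||^2 and f2 = ||x - b||^2."""
    return 2.0 * (x - a), 2.0 * (x - b)

def mgda_direction(g1, g2):
    """Min-norm convex combination of two gradients (two-objective MGDA).

    Solves min_{alpha in [0,1]} ||alpha*g1 + (1-alpha)*g2||^2 in closed form.
    """
    diff = g1 - g2
    alpha = np.clip(np.dot(g2 - g1, g2) / (np.dot(diff, diff) + 1e-12), 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

def rbf_repulsion(X, h=0.3):
    """Gaussian-RBF pairwise repulsion pushing nearby candidates apart."""
    diff = X[:, None, :] - X[None, :, :]      # (n, n, d) pairwise differences
    sq = (diff ** 2).sum(-1)                  # (n, n) squared distances
    w = np.exp(-sq / (2.0 * h * h))           # RBF kernel weights
    return (w[..., None] * diff).sum(axis=1) / (h * h)

def refine(X, lr=0.05, rep=0.01):
    """One guided step: descend along the common MGDA direction, repel neighbors."""
    D = np.stack([mgda_direction(*grads(x)) for x in X])
    return X - lr * D + rep * rbf_repulsion(X)

# Run the refinement on a random population (in SPREAD this guidance
# would be interleaved with the learned reverse diffusion steps).
X = rng.normal(2.0, 0.5, size=(16, 2))
for _ in range(200):
    X = refine(X)
```

After the loop, the population sits near the Pareto segment between `a` and `b` while the repulsion term keeps the points from collapsing onto a single stationary point, which is the intended effect of pairing a convergence-driving direction with a diversity-driving one.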