🤖 AI Summary
Offline multi-objective optimization aims to identify and generate high-quality, uniformly distributed Pareto-optimal solutions from a fixed design–objective dataset. This paper proposes a preference-guided diffusion generative framework featuring a novel diversity-aware preference guidance mechanism: it models ordinal relationships between solutions via Pareto dominance probability, steers the reverse generative process with classifier-guided sampling, and incorporates diversity regularization to control the distribution of samples in objective space. The proposed preference classifier exhibits strong generalization capability, enabling discovery of Pareto-optimal solutions beyond the training set. Evaluated on multiple continuous offline benchmarks, the method significantly outperforms existing inverse and generative approaches in both Pareto front approximation accuracy and solution distribution quality, while matching the performance of state-of-the-art forward surrogate models.
📝 Abstract
Offline multi-objective optimization aims to identify Pareto-optimal solutions given a dataset of designs and their objective values. In this work, we propose a preference-guided diffusion model that generates Pareto-optimal designs by leveraging a classifier-based guidance mechanism. Our guidance classifier is a preference model trained to predict the probability that one design dominates another, directing the diffusion model toward optimal regions of the design space. Crucially, this preference model generalizes beyond the training distribution, enabling the discovery of Pareto-optimal solutions outside the observed dataset. We further introduce a novel diversity-aware preference guidance mechanism that augments Pareto dominance preference with diversity criteria, ensuring that generated solutions are both optimal and well-distributed across the objective space, a capability absent in prior generative methods for offline multi-objective optimization. We evaluate our approach on various continuous offline multi-objective optimization tasks and find that it consistently outperforms other inverse/generative approaches while remaining competitive with forward/surrogate-based optimization methods. Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse, high-quality solutions that closely approximate the Pareto front.
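To make the guidance mechanism concrete, here is a minimal NumPy sketch of the core idea: each reverse sampling step denoises the current sample and nudges it along the gradient of a preference score that rewards dominating a reference design. Everything here is a toy stand-in, not the paper's implementation: the two quadratic objectives, the analytic `pref_logit` (a trained preference classifier would replace it), and the simplified reverse-diffusion update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-objective problem: minimize f1(x) = ||x - a||^2 and
# f2(x) = ||x - b||^2. Its Pareto set is the segment between a and b.
a, b = np.array([-1.0, 0.0]), np.array([1.0, 0.0])

def objectives(x):
    return np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])

def pref_logit(x, ref):
    # Toy stand-in for the learned preference model: larger when x
    # improves on the reference design in the objectives (soft dominance).
    return np.sum(objectives(ref) - objectives(x))

def grad_pref_logit(x, ref, eps=1e-4):
    # Finite-difference gradient of the preference logit w.r.t. x;
    # a trained classifier would supply this via backpropagation.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (pref_logit(x + d, ref) - pref_logit(x - d, ref)) / (2 * eps)
    return g

def guided_sample(ref, steps=100, scale=0.1):
    # Simplified classifier-guided reverse process: each step shrinks the
    # sample (standing in for denoising toward the data distribution),
    # adds the preference gradient, and injects noise that decays over time.
    x = rng.normal(size=2) * 3.0
    for t in range(steps, 0, -1):
        noise = rng.normal(size=2) * np.sqrt(t / steps) * 0.05
        x = 0.99 * x + scale * grad_pref_logit(x, ref) + noise
    return x

ref = np.array([2.0, 2.0])   # a dominated reference design
x_star = guided_sample(ref)
print(objectives(x_star), objectives(ref))
```

With this setup the guided sample drifts to the midpoint of `a` and `b`, which lies on the Pareto segment, and its objective vector dominates that of the reference design. The paper's diversity-aware variant would additionally regularize a batch of such samples to spread them along the front rather than collapsing to one point.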