🤖 AI Summary
In offline multi-objective optimization, generative models systematically underperform evolutionary algorithms on metrics such as generational distance, owing to constraints imposed by the offline data distribution. This work introduces the concept of "offline front shift," framing this limitation as a distribution-shift-constrained optimization problem, and employs integral probability metrics as a theoretical tool for diagnosing the failure modes of generative approaches. Empirical analysis of diffusion models and other generative methods, together with a characterization of their objective-space distributions, reveals that generative models tend to produce overly conservative samples that fail to adequately cover the true Pareto front. Experimental results confirm that offline front shift is a key performance bottleneck, offering a novel perspective for advancing offline multi-objective optimization.
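To make the diagnostic metric concrete, here is a minimal sketch of generational distance, using one common variant (the arithmetic mean of each candidate's Euclidean distance to its nearest reference point on the Pareto front). The function name and the choice of mean aggregation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def generational_distance(solutions: np.ndarray, front: np.ndarray) -> float:
    """Mean Euclidean distance from each solution's objective vector
    to its nearest point on a reference Pareto front.

    solutions: (n, m) array of objective vectors produced by a method.
    front:     (k, m) array of reference Pareto-optimal objective vectors.
    """
    # Pairwise distance matrix of shape (n, k) via broadcasting.
    dists = np.linalg.norm(solutions[:, None, :] - front[None, :, :], axis=-1)
    # For each solution, keep the distance to its closest front point.
    return float(dists.min(axis=1).mean())
```

A set that lies exactly on the reference front scores 0; conservative samples that cluster near the offline data, away from the front, score higher even when their hypervolume looks competitive.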
📝 Abstract
Offline multi-objective optimization (MOO) aims to recover Pareto-optimal designs from a finite, static dataset. Recent generative approaches, including diffusion models, show strong performance under the hypervolume metric, yet their behavior under other established MOO metrics is less understood. We show that generative methods systematically underperform evolutionary alternatives with respect to other metrics, such as generational distance. We relate this failure mode to the offline front shift, i.e., the displacement of the offline dataset from the Pareto front, which acts as a fundamental limitation in offline MOO. We argue that overcoming this limitation requires out-of-distribution sampling in objective space (as measured by an integral probability metric) and empirically observe that generative methods remain conservatively close to the offline objective distribution. Our results position offline MOO as a distribution-shift-limited problem and provide a diagnostic lens for understanding when and why generative optimization methods fail.
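The abstract's diagnostic, distance between the generated objective distribution and the offline one, can be sketched with one concrete member of the integral-probability-metric family: the kernel maximum mean discrepancy (MMD). The RBF kernel, bandwidth, and biased estimator below are illustrative assumptions, not the paper's exact choice.

```python
import numpy as np

def mmd_rbf(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared maximum mean discrepancy between samples X and Y with an
    RBF kernel -- an integral probability metric over an RKHS unit ball.

    X, Y: (n, m) and (k, m) arrays of objective vectors.
    Uses the simple biased estimator (diagonal terms included).
    """
    def gram(A: np.ndarray, B: np.ndarray) -> np.ndarray:
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    return float(gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean())
```

Under this lens, a generated set whose objective vectors stay close to the offline dataset yields a near-zero MMD: conservative, in-distribution sampling rather than the out-of-distribution movement toward the Pareto front that the argument says is required.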