🤖 AI Summary
Offline model-based optimization (MBO) faces a core challenge: optimizing expensive or non-queryable black-box functions using only static datasets. Extrapolating beyond the data incurs significant epistemic uncertainty, which can lead to reward hacking or out-of-distribution (OOD) mis-optimization. This work presents the first unified review of offline MBO, organizing the field into two pathways, surrogate modeling and generative modeling, and covering single- and multi-objective settings, benchmarks, and the evolution of the main paradigms. Methodologically, it spans deep surrogate models, generative design search, uncertainty quantification, OOD-robust training, and Pareto-front evaluation. It reviews standardized benchmarks and evaluation metrics that expose fundamental deficiencies of existing methods in safe optimization and OOD generalization. Finally, it identifies three frontier research directions: distributional generalization beyond the training support, safety-constrained optimization, and control of superintelligent systems, providing a trustworthy methodological foundation for applications in protein engineering, materials discovery, and related domains.
📝 Abstract
Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets. This setting is particularly relevant when querying the objective function is prohibitively expensive or infeasible, with applications spanning protein engineering, materials discovery, neural architecture search, and beyond. The main difficulty lies in accurately estimating the objective landscape beyond the available data, where extrapolation is fraught with significant epistemic uncertainty. This uncertainty can lead to objective hacking (reward hacking), i.e., exploiting model inaccuracies in unseen regions, or other spurious optimizations that yield misleadingly high performance estimates outside the training distribution. Recent advances in model-based optimization (MBO) have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models. Trained with carefully designed strategies, these models are more robust to out-of-distribution issues, facilitating the discovery of improved designs. Despite the field's growing impact on accelerating scientific discovery, it lacks a comprehensive review. To bridge this gap, we present the first thorough review of offline MBO. We begin by formalizing the problem for both single-objective and multi-objective settings and by reviewing recent benchmarks and evaluation metrics. We then categorize existing approaches into two key areas: surrogate modeling, which emphasizes accurate function approximation in out-of-distribution regions, and generative modeling, which explores high-dimensional design spaces to identify high-performing designs. Finally, we examine the key challenges and propose promising directions for advancement in this rapidly evolving field, including the safe control of superintelligent systems.
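The basic offline MBO loop described above (fit a surrogate to a static dataset, then search the design space against the surrogate) can be illustrated with a minimal toy sketch. This is not any specific method from the survey; it assumes a hypothetical 1-D quadratic objective, a small offline dataset covering only part of the design space, a polynomial surrogate, and plain gradient ascent as the design search. Comparing the surrogate's score with the true score at the proposed design is one way to see the over-optimistic extrapolation the abstract warns about.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Black-box objective; used only to build the offline dataset
    # and to evaluate the final proposal, never during optimization.
    return -(x - 0.5) ** 2

# Static offline dataset: designs cover only part of the space ([-1, 0.2]),
# so the true optimum (x = 0.5) lies outside the data support.
X = rng.uniform(-1.0, 0.2, size=50)
y = true_f(X) + rng.normal(0.0, 0.01, size=50)

# Surrogate modeling: fit a simple quadratic surrogate to the offline data.
coeffs = np.polyfit(X, y, deg=2)
surrogate = np.poly1d(coeffs)
grad = surrogate.deriv()

# Design search: gradient ascent on the surrogate, starting from the
# best design observed in the dataset.
x = X[np.argmax(y)]
for _ in range(200):
    x += 0.05 * grad(x)

# The proposed design may sit outside the training distribution; any gap
# between the surrogate score and the true score is extrapolation error.
print(f"proposed design:  {x:.3f}")
print(f"surrogate score:  {surrogate(x):.3f}")
print(f"true score:       {true_f(x):.3f}")
```

In this well-specified toy case the quadratic surrogate extrapolates gracefully; with a richer model class or higher-dimensional designs, the same loop can confidently propose designs whose surrogate scores are spurious, which is exactly why offline-specific robust training and uncertainty quantification are needed.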