NeuroPareto: Calibrated Acquisition for Costly Many-Goal Search in Vast Parameter Spaces

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the twin challenges of high evaluation cost and limited computational budget in many-objective optimization by integrating uncertainty disentanglement with history-conditioned acquisition. A calibrated Bayesian classifier estimates epistemic uncertainty over non-dominated ranks, while deep Gaussian processes decompose predictive uncertainty into reducible and irreducible components. An online, hypervolume-driven acquisition network then directs expensive evaluations, combining the disentangled uncertainty estimates with historical hypervolume improvements inside the acquisition function. A hierarchical screening strategy and an amortized surrogate-update mechanism keep computational overhead low while preserving convergence and diversity. Experiments on the DTLZ and ZDT benchmarks and a real-world subsurface energy extraction task show substantial gains in both Pareto proximity and hypervolume over state-of-the-art methods.
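The hypervolume-improvement signal the acquisition network is trained on can be illustrated with a minimal two-objective sketch (both objectives minimized). This is not the paper's implementation; the reference point `ref` and the helper names are illustrative assumptions.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by `points` w.r.t. reference point `ref`
    (minimization in both objectives), via a standard sweep."""
    # Keep only the non-dominated points, sorted by the first objective.
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(set(points)):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # Sum the horizontal slices between consecutive front points.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def hv_improvement(archive, candidate, ref):
    """Hypervolume gained by evaluating `candidate` on top of `archive` --
    the kind of historical signal an acquisition model could regress on."""
    return hypervolume_2d(archive + [candidate], ref) - hypervolume_2d(archive, ref)
```

A candidate that fills a gap in the front (e.g. `(2, 2)` between `(1, 3)` and `(3, 1)`) yields positive improvement, while a dominated candidate yields zero, which is exactly why such a signal favors both convergence and diversity.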

📝 Abstract
The pursuit of optimal trade-offs in high-dimensional search spaces under stringent computational constraints poses a fundamental challenge for contemporary multi-objective optimization. We develop NeuroPareto, a cohesive architecture that integrates rank-centric filtering, uncertainty disentanglement, and history-conditioned acquisition strategies to navigate complex objective landscapes. A calibrated Bayesian classifier estimates epistemic uncertainty across non-domination tiers, enabling rapid generation of high-quality candidates with minimal evaluation cost. Deep Gaussian Process surrogates further separate predictive uncertainty into reducible and irreducible components, providing refined predictive means and risk-aware signals for downstream selection. A lightweight acquisition network, trained online from historical hypervolume improvements, guides expensive evaluations toward regions balancing convergence and diversity. With hierarchical screening and amortized surrogate updates, the method maintains accuracy while keeping computational overhead low. Experiments on DTLZ and ZDT suites and a subsurface energy extraction task show that NeuroPareto consistently outperforms classifier-enhanced and surrogate-assisted baselines in Pareto proximity and hypervolume.
Problem

Research questions and friction points this paper is trying to address.

multi-objective optimization
high-dimensional search spaces
Pareto front
computational constraints
expensive evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

NeuroPareto
Bayesian classifier
Deep Gaussian Process
uncertainty disentanglement
hypervolume-based acquisition