Few-Shot Design Optimization by Exploiting Auxiliary Information

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of few-shot optimization of expensive black-box functions in real-world scenarios, particularly when high-dimensional auxiliary information and data from multiple historical tasks are available. The authors propose a context-aware neural prediction architecture that jointly models high-dimensional auxiliary observations $h(x)$ and cross-task historical data to efficiently predict the performance $f(x)$ of new design candidates. By integrating few-shot learning, context-conditioned prediction, and multi-task optimization within a neural framework, the method overcomes the information utilization bottleneck of conventional Bayesian optimization. Empirical results on robotic hardware design and neural network hyperparameter tuning demonstrate significant improvements over existing approaches, achieving more accurate performance prediction and faster convergence. The study also introduces and open-sources a new benchmark for hardware design optimization.
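The summary above describes a model that pools a few-shot context of observations $(x, h(x), f(x))$ into a task representation and conditions the prediction of $f$ at new designs on it. The paper's exact architecture is not given here; the following is a minimal Conditional-Neural-Process-style sketch of that idea, where all dimensions, the mean-pooling aggregator, and the untrained random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random (untrained) weights for a small MLP -- illustrative only.
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def mlp(params, z):
    for i, (W, b) in enumerate(params):
        z = z @ W + b
        if i < len(params) - 1:
            z = np.tanh(z)
    return z

d_x, d_h, d_emb = 4, 16, 32                    # assumed design, auxiliary, and embedding dims
enc = mlp_params([d_x + d_h + 1, 64, d_emb])   # encodes one (x, h(x), f(x)) context triple
dec = mlp_params([d_x + d_emb, 64, 1])         # predicts f at a query x given the task embedding

def predict_f(context_x, context_h, context_f, query_x):
    # Encode each context observation, then mean-pool into a task embedding r.
    triples = np.concatenate([context_x, context_h, context_f[:, None]], axis=1)
    r = mlp(enc, triples).mean(axis=0)
    # Condition the prediction at each query design on the pooled embedding.
    inp = np.concatenate([query_x, np.broadcast_to(r, (len(query_x), d_emb))], axis=1)
    return mlp(dec, inp)[:, 0]

# Toy usage: 5 context observations with high-dimensional auxiliary feedback, 3 queries.
ctx_x = rng.normal(size=(5, d_x))
ctx_h = rng.normal(size=(5, d_h))
ctx_f = rng.normal(size=5)
pred = predict_f(ctx_x, ctx_h, ctx_f, rng.normal(size=(3, d_x)))
print(pred.shape)  # (3,)
```

In an optimization loop, such a predictor (trained across the historical task family) would score candidate designs, replacing the Gaussian-process surrogate of conventional Bayesian optimization.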

📝 Abstract
Many real-world design problems involve optimizing an expensive black-box function $f(x)$, such as hardware design or drug discovery. Bayesian Optimization has emerged as a sample-efficient framework for this problem. However, the basic setting considered by these methods is simplified compared to real-world experimental setups, where experiments often generate a wealth of useful information. We introduce a new setting where an experiment generates high-dimensional auxiliary information $h(x)$ along with the performance measure $f(x)$; moreover, a history of previously solved tasks from the same task family is available for accelerating optimization. A key challenge of our setting is learning how to represent and utilize $h(x)$ for efficiently solving new optimization tasks beyond the task history. We develop a novel approach for this setting based on a neural model which predicts $f(x)$ for unseen designs given a few-shot context containing observations of $h(x)$. We evaluate our method on two challenging domains, robotic hardware design and neural network hyperparameter tuning, and introduce a novel design problem and large-scale benchmark for the former. On both domains, our method utilizes auxiliary feedback effectively to achieve more accurate few-shot prediction and faster optimization of design tasks, significantly outperforming several methods for multi-task optimization.
Problem

Research questions and friction points this paper is trying to address.

Few-Shot Design Optimization
Auxiliary Information
Black-Box Optimization
Multi-Task Optimization
Bayesian Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-Shot Optimization
Auxiliary Information
Bayesian Optimization
Multi-Task Learning
Neural Design Prediction