FIRE: Multi-fidelity Regression with Distribution-conditioned In-context Learning using Tabular Foundation Models

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of extreme sparsity in high-fidelity data within multi-fidelity regression, which renders Gaussian process surrogates computationally expensive and prone to overfitting. To overcome this, the authors propose FIRE, a novel framework that, for the first time, integrates Tabular Foundation Models (TFMs) into multi-fidelity regression. FIRE enables zero-shot residual correction without any training by leveraging in-context Bayesian inference and distributional conditioning: it constructs a high-fidelity correction model conditioned on the posterior predictive distribution of a low-fidelity model, thereby effectively transferring cross-fidelity information and capturing heteroscedastic errors. Evaluated across 31 benchmark tasks, FIRE significantly outperforms seven state-of-the-art methods in prediction accuracy, uncertainty quantification, and computational efficiency.

📝 Abstract
Multi-fidelity (MF) regression often operates in regimes of extreme data imbalance, where the commonly used Gaussian-process (GP) surrogates struggle with cubic scaling costs and overfit to sparse high-fidelity observations, limiting efficiency and generalization in real-world applications. We introduce FIRE, a training-free MF framework that leverages tabular foundation models (TFMs) to perform zero-shot in-context Bayesian inference via a high-fidelity correction model conditioned on the low-fidelity model's posterior predictive distributions. This cross-fidelity information transfer via distributional summaries captures heteroscedastic errors, enabling robust residual learning without model retraining. Across 31 benchmark problems spanning synthetic and real-world tasks (e.g., DrivAerNet, LCBench), FIRE delivers a stronger performance-time trade-off than seven state-of-the-art GP-based or deep learning MF regression methods, ranking highest in accuracy and uncertainty quantification with runtime advantages. Limitations include context window constraints and dependence on the quality of the pre-trained TFMs.
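
To make the mechanism concrete, below is a minimal, illustrative sketch of the two-stage workflow the abstract describes: the low-fidelity surrogate's posterior predictive mean and standard deviation are appended as extra input features to a high-fidelity residual-correction model. This is not the authors' implementation; scikit-learn Gaussian processes stand in for the pre-trained TFMs purely so the sketch runs, and the toy data, function names, and feature layout are assumptions.

```python
# Sketch of a FIRE-style two-stage workflow (illustrative assumptions throughout).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Toy multi-fidelity setup: abundant cheap low-fidelity (LF) data, sparse high-fidelity (HF) data.
X_lf = rng.uniform(0.0, 1.0, size=(200, 1))
y_lf = np.sin(8.0 * X_lf[:, 0]) + 0.3 * rng.normal(size=200)   # biased, noisy LF response
X_hf = rng.uniform(0.0, 1.0, size=(12, 1))
y_hf = np.sin(8.0 * X_hf[:, 0]) + 0.2 * X_hf[:, 0]             # scarce HF response

# Stage 1: LF surrogate. A GP stands in here for the pre-trained LF TFM.
lf_model = GaussianProcessRegressor(alpha=0.1, normalize_y=True).fit(X_lf, y_lf)

def conditioned_features(X):
    """Augment inputs with the LF posterior predictive mean/std (distributional conditioning)."""
    mu, sigma = lf_model.predict(X, return_std=True)
    return np.column_stack([X, mu, sigma]), mu

# Stage 2: HF correction model learns residuals, conditioned on LF distributional summaries.
Z_hf, mu_at_hf = conditioned_features(X_hf)
hf_corrector = GaussianProcessRegressor(alpha=1e-3, normalize_y=True).fit(Z_hf, y_hf - mu_at_hf)

# Prediction: LF mean plus the learned HF correction at new inputs.
X_test = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
Z_test, mu_at_test = conditioned_features(X_test)
y_pred = mu_at_test + hf_corrector.predict(Z_test)
print(y_pred[:5])
```

In the framework described by the paper, both stages would instead be handled by a single pre-trained TFM performing in-context Bayesian inference, so no fitting or retraining is required.
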
Problem

Research questions and friction points this paper is trying to address.

multi-fidelity regression
data imbalance
Gaussian process
overfitting
computational scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-fidelity regression
tabular foundation models
in-context learning
distribution-conditioned inference
zero-shot Bayesian inference
Rosen Ting-Ying Yu
Center for Computational Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
Nicholas Sung
Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
Faez Ahmed
Associate Professor, MIT
Generative AI, Engineering Design, Machine Learning, Engineering Optimization, Data-driven Design