Does Unsupervised Domain Adaptation Improve the Robustness of Amortized Bayesian Inference? A Systematic Evaluation

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether unsupervised domain adaptation (UDA) can improve the robustness of neural networks trained solely on synthetic data when they are deployed on real-world observations under simulation-to-reality domain shift, focusing on the reliability of Bayesian inference. Method: the authors construct controllable low- and high-dimensional simulation benchmarks, systematically inject diverse domain mismatches (including unmodeled noise, model imperfections, and prior mismatch), and evaluate UDA methods that align the embedding and summary-statistic spaces. Contribution/Results: UDA significantly enhances the robustness of amortized Bayesian inference (ABI) to observational noise and model error; however, it degrades performance under prior mismatch, a previously unreported "mismatch-type sensitivity" of UDA. The paper delineates the effective boundaries and failure modes of UDA in ABI, offering theoretical insights and practical guidelines for deploying simulation-trained models in scientific inference.
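To make the alignment idea concrete: one common UDA criterion of the kind evaluated here penalizes the discrepancy between the summary statistics of simulated and observed data, for example via the maximum mean discrepancy (MMD). The sketch below is illustrative, not the paper's implementation; the RBF bandwidth and function names are assumptions.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(sim_summaries, obs_summaries, bandwidth=1.0):
    # Squared maximum mean discrepancy between the summary
    # statistics of simulated and observed data. Minimizing this
    # term during training pulls the two summary spaces together,
    # which is the alignment mechanism the paper stress-tests.
    k_ss = rbf_kernel(sim_summaries, sim_summaries, bandwidth)
    k_oo = rbf_kernel(obs_summaries, obs_summaries, bandwidth)
    k_so = rbf_kernel(sim_summaries, obs_summaries, bandwidth)
    return k_ss.mean() + k_oo.mean() - 2 * k_so.mean()

# Matched distributions yield a small MMD^2; a shifted "observed"
# domain (here a hypothetical mean shift of 2.0) yields a large one.
rng = np.random.default_rng(0)
sim = rng.normal(size=(256, 4))
obs = rng.normal(size=(256, 4))
shifted = rng.normal(loc=2.0, size=(256, 4))
print(mmd2(sim, obs) < mmd2(sim, shifted))  # True
```

The paper's key caution applies directly to a loss like this: forcing summary distributions to match is helpful when the gap comes from unmodeled noise, but under prior mismatch the "gap" carries real information about the parameters, and closing it can hurt inference.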

📝 Abstract
Neural networks are fragile when confronted with data that significantly deviates from their training distribution. This is true in particular for simulation-based inference methods, such as neural amortized Bayesian inference (ABI), where models trained on simulated data are deployed on noisy real-world observations. Recent robust approaches employ unsupervised domain adaptation (UDA) to match the embedding spaces of simulated and observed data. However, the lack of comprehensive evaluations across different domain mismatches raises concerns about the reliability in high-stakes applications. We address this gap by systematically testing UDA approaches across a wide range of misspecification scenarios in both a controlled and a high-dimensional benchmark. We demonstrate that aligning summary spaces between domains effectively mitigates the impact of unmodeled phenomena or noise. However, the same alignment mechanism can lead to failures under prior misspecifications, a critical finding with practical consequences. Our results underscore the need for careful consideration of misspecification types when using UDA techniques to increase the robustness of ABI in practice.
Problem

Research questions and friction points this paper is trying to address.

Evaluates whether UDA improves the robustness of ABI
Tests UDA across diverse domain mismatches
Highlights failure risks under prior misspecification
Innovation

Methods, ideas, or system contributions that make the work stand out.

UDA enhances ABI robustness to noise and model error
Aligns summary spaces of simulated and observed data
Covers domain-mismatch types systematically
Lasse Elsemüller
Heidelberg University
Valentin Pratz
Heidelberg University, Zuse School ELIZA
Mischa von Krause
Post-Doctoral Researcher, Heidelberg University
cognitive modeling, individual differences
Andreas Voss
Heidelberg University
Paul-Christian Bürkner
TU Dortmund University
Stefan T. Radev
Assistant Professor, Rensselaer Polytechnic Institute
Deep Learning, Bayesian Statistics, Stochastic Models, Machine Learning, Cognitive Modeling