🤖 AI Summary
Conventional simulation-based inference (SBI) fails under model misspecification, undermining statistical validity. Method: We propose a robust frequentist inference framework that targets a projection parameter (the best approximation of the true data-generating distribution within the misspecified model family) and employs an exponentially tilted model expansion to mitigate misspecification bias. We further develop a projection-based goodness-of-fit test and integrate active-learning sampling with closed-form approximate modeling to improve computational efficiency. Contributions/Results: (1) Under misspecification, even when standard regularity conditions fail, we construct asymptotically valid and efficient frequentist confidence sets; (2) the method substantially improves the efficiency of exploring the parameter space; (3) it provides a verifiable diagnostic for detecting and quantifying model misspecification. The framework delivers both theoretical guarantees and a practical implementation for reliable SBI in realistic, non-ideal modeling scenarios.
📝 Abstract
Simulation-Based Inference (SBI) is an approach to statistical inference in which simulations from an assumed model are used to construct estimators and confidence sets. SBI is often used when the likelihood is intractable, and to construct confidence sets that do not rely on asymptotic approximations or regularity conditions. Traditional SBI methods assume that the model is correct, but this assumption can lead to invalid inference when the model is misspecified. This paper introduces robust methods that allow for valid frequentist inference in the presence of model misspecification. We propose a framework in which the target of inference is a projection parameter: the parameter that minimizes a discrepancy between the true distribution and the assumed model. The method guarantees valid inference even when the model is incorrectly specified and even if the standard regularity conditions fail. Alternatively, we introduce model expansion through exponential tilting as another way to account for model misspecification. We also develop an SBI-based goodness-of-fit test to detect model misspecification. Finally, we propose two ideas that are useful in the SBI framework beyond robust inference: an SBI-based method for obtaining closed-form approximations of intractable models, and an active learning approach to sample the parameter space more efficiently.
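To make the projection-parameter idea concrete, here is a rough, self-contained sketch (not the paper's actual algorithm; the Gaussian location family, contaminated data, Wasserstein-1 discrepancy, grid search, and all names are illustrative assumptions). The "true" data come from a contaminated mixture that lies outside the assumed model family, and the projection parameter is the member of the family whose simulated output is closest to the observed sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" data: a contaminated Gaussian mixture -- deliberately outside
# the assumed model family N(theta, 1), so the model is misspecified.
n = 2000
clean = rng.random(n) < 0.9
x_obs = np.where(clean, rng.normal(0.5, 1.0, n), rng.normal(4.0, 1.0, n))

def simulate(theta, n, rng):
    """Misspecified (assumed) model family: N(theta, 1)."""
    return rng.normal(theta, 1.0, n)

def w1_distance(a, b):
    """Empirical 1-Wasserstein distance between equal-size samples."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

# Projection parameter: the theta in the model family whose simulated
# distribution minimizes the discrepancy to the observed data.
grid = np.linspace(-1.0, 3.0, 81)
dists = [w1_distance(x_obs, simulate(t, n, rng)) for t in grid]
theta_proj = grid[int(np.argmin(dists))]
print(theta_proj)
```

The sketch only produces a point estimate of the projection parameter; the paper's contribution is the surrounding frequentist machinery (confidence sets for this target that remain valid under misspecification), which is not implemented here.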