Bootstrap Diagnostic Tests

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Classical limit theorems (e.g., the central limit theorem) often fail in practice because their underlying assumptions are violated, leading to invalid statistical inference. To address this, the paper proposes a general bootstrap-based diagnostic: it constructs the empirical distribution of a test statistic via resampling and quantifies its deviation from the limiting Gaussian distribution, thereby detecting misspecification such as weak instruments or nonstationary time series. The key contribution is to exploit the intrinsic randomness of the bootstrap to construct a test that induces no pre-testing bias, uses the same critical values across a broad range of applications, and is consistent against deviations from asymptotic normality. The theory rests on a sample of i.i.d. draws of the bootstrap statistic; simulations and an empirical application demonstrate high power and robustness, establishing the method as a reliable, plug-and-play diagnostic tool for statistical inference.

📝 Abstract
Violation of the assumptions underlying classical (Gaussian) limit theory frequently leads to unreliable statistical inference. This paper shows the novel result that the bootstrap can detect such violation by means of simple and powerful tests which (a) induce no pre-testing bias, (b) can be performed using the same critical values in a broad range of applications, and (c) are consistent against deviations from asymptotic normality. By focusing on the discrepancy between the conditional distribution of a bootstrap statistic and the (limiting) Gaussian distribution which obtains under valid specification, we show how to assess whether this discrepancy is large enough to indicate specification invalidity. The method, which is computationally straightforward, only requires measuring the discrepancy between the bootstrap and the Gaussian distributions based on a sample of i.i.d. draws of the bootstrap statistic. We derive sufficient conditions for the randomness in the data to mix with the randomness in the bootstrap repetitions in a way such that (a), (b) and (c) above hold. To demonstrate the practical relevance and broad applicability of our diagnostic procedure, we discuss five scenarios where the asymptotic Gaussian approximation may fail: (i) weak instruments in instrumental variable regression; (ii) non-stationarity in autoregressive time series; (iii) parameters near or at the boundary of the parameter space; (iv) infinite variance innovations in a location model for i.i.d. data; (v) invalidity of the delta method due to (near-)rank deficiency in the implied Jacobian matrix. An illustration drawn from the empirical macroeconomic literature concludes.
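The abstract's core recipe, generating i.i.d. draws of a bootstrap statistic and measuring their discrepancy from the standard normal, can be sketched in a few lines. This is a toy illustration only, not the authors' exact procedure: the nonparametric resampling of a studentized mean and the Kolmogorov-Smirnov comparison are concrete choices assumed here for the sketch.

```python
import numpy as np
from scipy import stats

def bootstrap_normality_diagnostic(x, n_boot=999, seed=0):
    """Toy bootstrap diagnostic for a studentized sample mean.

    Draws i.i.d. bootstrap replicates of a t-type statistic and
    compares their empirical distribution with N(0, 1) via a
    Kolmogorov-Smirnov test. A small p-value flags a discrepancy
    from the Gaussian limit (e.g., infinite-variance innovations,
    scenario (iv) in the paper).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar = x.mean()
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xs = rng.choice(x, size=n, replace=True)   # nonparametric resample
        t_boot[b] = np.sqrt(n) * (xs.mean() - xbar) / xs.std(ddof=1)
    # Discrepancy between the bootstrap draws and the standard normal
    return stats.kstest(t_boot, "norm")

rng = np.random.default_rng(1)
# Well-behaved data: bootstrap t-statistics should look Gaussian
res_ok = bootstrap_normality_diagnostic(rng.normal(size=500))
# Heavy-tailed data (infinite variance): the Gaussian limit fails
res_bad = bootstrap_normality_diagnostic(rng.standard_cauchy(size=500))
print(res_ok.pvalue, res_bad.pvalue)
```

In this sketch, rejecting normality of the bootstrap draws signals that the usual Gaussian critical values for the original statistic are suspect; the paper's actual tests are constructed so that this check itself induces no pre-testing bias.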
Problem

Research questions and friction points this paper is trying to address.

Detecting violations of the Gaussian assumptions underlying classical limit theory that invalidate statistical inference
Constructing bootstrap tests that induce no pre-testing bias and share common critical values across applications
Measuring the discrepancy between bootstrap and limiting Gaussian distributions to flag specification invalidity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the intrinsic randomness of bootstrap repetitions to detect violations of Gaussian limit theory
Measures the discrepancy between the conditional bootstrap distribution and the limiting Gaussian distribution
Covers five failure scenarios: weak instruments, nonstationarity, boundary parameters, infinite-variance innovations, and rank-deficient Jacobians
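As a concrete illustration of one failure scenario, nonstationarity in an autoregression, the same kind of diagnostic can be run on a recursive residual bootstrap of an AR(1) t-statistic. This is a toy sketch under assumptions made here (no intercept, i.i.d. resampling of recentred residuals, a Kolmogorov-Smirnov comparison), not the paper's algorithm: near a unit root the bootstrap t-statistic takes a Dickey-Fuller-type, non-Gaussian shape that the diagnostic should flag.

```python
import numpy as np
from scipy import stats

def bootstrap_diag_ar1(y, n_boot=499, seed=0):
    """Normality diagnostic for a recursive bootstrap of an AR(1) t-stat.

    Fits y_t = rho * y_{t-1} + e_t by OLS (no intercept), rebuilds
    bootstrap series from resampled recentred residuals, and KS-tests
    the bootstrap t-statistics (centred at rho_hat) against N(0, 1).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    y0, y1 = y[:-1], y[1:]
    rho_hat = (y0 @ y1) / (y0 @ y0)
    resid = y1 - rho_hat * y0
    resid = resid - resid.mean()              # recentre residuals
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=n - 1, replace=True)
        ys = np.empty(n)
        ys[0] = y[0]
        for t in range(1, n):                 # recursive bootstrap sample
            ys[t] = rho_hat * ys[t - 1] + e[t - 1]
        s0, s1 = ys[:-1], ys[1:]
        rho_b = (s0 @ s1) / (s0 @ s0)
        se_b = np.sqrt((s1 - rho_b * s0).var(ddof=1) / (s0 @ s0))
        t_boot[b] = (rho_b - rho_hat) / se_b
    return stats.kstest(t_boot, "norm")

rng = np.random.default_rng(2)
e = rng.normal(size=400)
walk = np.cumsum(e)                           # rho = 1: nonstationary
stationary = np.empty(400)
stationary[0] = e[0]
for t in range(1, 400):                       # rho = 0.5: stationary
    stationary[t] = 0.5 * stationary[t - 1] + e[t]
res_walk = bootstrap_diag_ar1(walk)
res_stat = bootstrap_diag_ar1(stationary)
print(res_walk.pvalue, res_stat.pvalue)
```

In the stationary case the bootstrap t-statistics are approximately standard normal, while in the random-walk case their distribution is distorted; the diagnostic turns that contrast into a formal test.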
Giuseppe Cavaliere
Department of Economics, University of Bologna, Italy; Department of Economics, University of Exeter, UK
Luca Fanelli
Department of Economics, University of Bologna, Italy
Iliyan Georgiev