🤖 AI Summary
This work addresses the challenge of selecting regularization hyperparameters in conditional moment models when the smoothness of the unknown nuisance function is unavailable. We propose the first adaptive framework based on the discrepancy principle, which automatically balances bias and variance without requiring prior knowledge of the smoothness level. The method applies to both Regularized DeepIV (RDIV) and Tikhonov-regularized Adversarial Estimators (TRAE), yielding a fully adaptive doubly robust estimator. Theoretical analysis shows that the proposed approach achieves the same optimal convergence rates, in both weak and strong norms, as if the smoothness were known a priori, thereby establishing the first instance of adaptive optimal inference for linear functionals in this setting.
📝 Abstract
We study adaptive estimation and inference in ill-posed linear inverse problems defined by conditional moment restrictions. Existing regularized estimators, such as Regularized DeepIV (RDIV), require prior knowledge of the smoothness of the nuisance function, typically encoded by a β-source condition, to tune their regularization parameters. In practice, this smoothness is unknown, and misspecified hyperparameters can lead to suboptimal convergence rates or instability.
We introduce a discrepancy-principle-based framework for adaptive hyperparameter selection that automatically balances bias and variance without relying on the unknown smoothness parameter. Our framework applies to both RDIV (Li et al. [2024]) and the Tikhonov Regularized Adversarial Estimator (TRAE) (Bennett et al. [2023a]), and it achieves the same convergence rates, in both weak and strong metrics, as when the smoothness is known. Building on this, we construct a fully adaptive doubly robust estimator for linear functionals that attains the optimal rate of the better-conditioned primal or dual problem, providing a practical, theoretically grounded approach for adaptive inference in ill-posed econometric models.
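To give a concrete sense of the mechanism, the sketch below illustrates the classical (Morozov) discrepancy principle on a finite-dimensional Tikhonov-regularized least-squares problem. This is a generic textbook illustration under simplifying assumptions, not the paper's RDIV/TRAE estimators; the function names and the grid of candidate parameters are hypothetical choices for exposition.

```python
import numpy as np

def tikhonov_solve(A, y, alpha):
    """Tikhonov-regularized least squares: argmin_x ||Ax - y||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_principle(A, y, delta, tau=1.5, alphas=None):
    """Morozov's discrepancy principle (illustrative sketch).

    Returns the largest regularization strength alpha (and its solution)
    whose residual ||A x_alpha - y|| is at most tau * delta, where delta
    is the noise level. The residual shrinks monotonically as alpha
    decreases, so scanning from strong to weak regularization and
    stopping at the first feasible alpha balances bias (large alpha)
    against variance (small alpha) using only the noise level, with no
    knowledge of the smoothness of the true solution.
    """
    if alphas is None:
        alphas = np.logspace(2, -8, 60)  # descending grid of candidate alphas (hypothetical choice)
    for alpha in alphas:
        x = tikhonov_solve(A, y, alpha)
        if np.linalg.norm(A @ x - y) <= tau * delta:
            return alpha, x
    # Fallback: no candidate met the discrepancy target; return the weakest regularization.
    return alphas[-1], tikhonov_solve(A, y, alphas[-1])
```

On a severely ill-conditioned problem (e.g. a Hilbert matrix), the unregularized solve amplifies noise enormously, while the alpha selected by the discrepancy criterion keeps the data fit at the noise level and yields a far more stable estimate. The paper's contribution is, roughly, a rigorous analogue of this stopping rule for nonparametric conditional-moment estimators, with rate guarantees in both weak and strong norms.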