AI Summary
This study addresses the challenge of variable selection in high-dimensional linear regression, where noise and proxy variables often hinder the simultaneous achievement of accuracy and sparsity. To tackle this issue, the authors propose Boosting with Multiple Testing (BMT), a novel method that integrates a multiple hypothesis testing framework into a forward stepwise boosting procedure. At each iteration, only the most statistically significant variable is selected, and candidate variables are screened using family-wise error rate control. This approach effectively curbs greedy selection of noise or proxy variables and enjoys near-oracle properties, enabling consistent recovery of the true underlying model. The theoretical analysis leverages the multiple testing framework of Chudik et al. (2018) and strong mixing process inequalities from Dendramis et al. (2022). Simulation studies demonstrate that BMT outperforms OCMT and Lasso-type methods in both model selection accuracy and coefficient estimation (measured by RMSE), while empirical applications in macro-finance yield sparse, interpretable, and highly predictive models.
Abstract
High-dimensional regression specification and analysis is a complex and active area of research in statistics, machine learning, and econometrics. This paper proposes a new approach, Boosting with Multiple Testing (BMT), which combines forward stepwise variable selection with the multiple testing framework of Chudik et al. (2018). At each stage, the model is updated by adding only the most significant regressor conditional on those already included, while a family-wise multiple testing filter is applied to the remaining candidates. In this way, the method retains the strong screening properties of Chudik et al. (2018) while operating in a less greedy manner with respect to proxy and noise variables. Using sharp probability inequalities for heterogeneous strongly mixing processes from Dendramis et al. (2022), we show that BMT enjoys oracle-type properties relative to an approximating model that includes all true signals and excludes pure noise variables: this model is selected with probability tending to one, and the resulting estimator achieves standard parametric rates for prediction error and coefficient estimation. Additional results establish conditions under which BMT recovers the exact true model and avoids selection of proxy signals. Monte Carlo experiments indicate that BMT performs very well relative to OCMT and Lasso-type procedures, delivering higher model selection accuracy and smaller RMSE for the estimated coefficients, especially under strong multicollinearity of the regressors. Two empirical illustrations, based on a large set of macro-financial indicators as covariates, show that BMT yields sparse, interpretable specifications with favourable out-of-sample performance.
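To make the iteration described above concrete, the following is a minimal sketch of a BMT-style procedure: at each step, every remaining candidate is tested conditional on the variables already selected, and the single most significant candidate is admitted only if its t-statistic survives a family-wise (here, Bonferroni) threshold over the remaining candidates. This is an illustration under simplifying assumptions (i.i.d. errors, a plain Bonferroni critical value), not the authors' exact procedure; the function name `bmt_sketch` and all tuning choices are hypothetical.

```python
import numpy as np
from scipy import stats


def bmt_sketch(X, y, alpha=0.05):
    """Hypothetical sketch of forward stepwise selection with a
    family-wise multiple-testing filter, in the spirit of BMT."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    while remaining:
        best_j, best_t = None, 0.0
        for j in remaining:
            # OLS on the current selection plus candidate j (intercept included)
            Z = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            dof = n - Z.shape[1]
            sigma2 = resid @ resid / dof
            # t-statistic of the candidate's coefficient (last column of Z)
            se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[-1, -1])
            t = abs(beta[-1]) / se
            if t > best_t:
                best_j, best_t = j, t
        # Bonferroni critical value over the remaining candidates (FWER control)
        crit = stats.t.ppf(1 - alpha / (2 * len(remaining)),
                           df=n - len(selected) - 2)
        if best_j is None or best_t <= crit:
            break  # no candidate survives the family-wise filter: stop
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

Because only the single most significant survivor is added per step, the procedure is less greedy than a one-shot screen: each new test re-conditions on the variables already in the model, which is what curbs the pickup of proxy variables correlated with true signals.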