🤖 AI Summary
This paper addresses how selection into the treatment group threatens the parallel trends assumption in difference-in-differences (DiD) estimation, a critical yet under-characterized identification challenge. Method: We formally characterize the empirical content of this threat and derive necessary and sufficient conditions for parallel trends to hold under general selection mechanisms. We propose a selection-based bias decomposition that systematically partitions the bias of DiD into selection effects and time-varying heterogeneity effects, and we develop operational benchmarking strategies, both with and without covariates, grounded in causal inference theory, selection modeling, and sensitivity analysis. Contribution/Results: Applied to a reanalysis of the National Supported Work (NSW) training experiment, our approach quantifies and benchmarks selection bias, substantially improving the credibility of DiD estimates and the robustness of the resulting causal conclusions.
📝 Abstract
We study the role of selection into treatment in difference-in-differences (DiD) designs. We derive necessary and sufficient conditions for the parallel trends assumption to hold under general classes of selection mechanisms; these conditions characterize the empirical content of parallel trends. We use the necessary conditions to obtain a selection-based decomposition of the bias of DiD and provide easy-to-implement strategies for benchmarking its components. We also provide templates for justifying DiD in applications with and without covariates. A reanalysis of the causal effect of NSW training programs demonstrates the usefulness of our selection-based approach to benchmarking the bias of DiD.
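To fix ideas, the bias that parallel trends rules out can be sketched in standard two-period, two-group notation (a textbook illustration with potential outcomes $Y_t(0),Y_t(1)$ and treatment indicator $D$; this is not the paper's full selection-based decomposition):

```latex
% Canonical two-period DiD comparison of pre/post means across groups
% (textbook notation; an illustrative sketch, not the paper's framework).
\tau_{\mathrm{DiD}}
  = \big[\mathbb{E}(Y_{t}\mid D=1)-\mathbb{E}(Y_{t-1}\mid D=1)\big]
  - \big[\mathbb{E}(Y_{t}\mid D=0)-\mathbb{E}(Y_{t-1}\mid D=0)\big]
  = \mathrm{ATT}
  + \underbrace{\mathbb{E}\big(Y_{t}(0)-Y_{t-1}(0)\mid D=1\big)
  - \mathbb{E}\big(Y_{t}(0)-Y_{t-1}(0)\mid D=0\big)}_{\text{bias: differential untreated trend}}
```

Parallel trends asserts that the underbraced term is zero; when units select into treatment based on untreated potential outcomes, that term generally does not vanish, which is the bias the paper decomposes and benchmarks.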