🤖 AI Summary
This paper addresses causal identification bias in staggered difference-in-differences (DiD) arising from time-varying treatment misclassification and policy anticipation effects. Methodologically, we (1) construct average treatment effect on the treated (ATT) estimators that account for observed and true treatment timing, respectively; (2) develop a joint testing procedure to identify the timing and magnitude of misclassification and anticipation; and (3) embed time-varying misclassification and anticipatory behavior into the DiD framework, deriving bias-corrected estimators and robust inference procedures. Empirically, applying our framework to evaluate Indonesia's anti-cheating policy corrects substantial bias in conventional staggered DiD estimates, demonstrating both theoretical validity and practical applicability.
📄 Abstract
This paper examines the identification and estimation of treatment effects in staggered adoption designs -- a common extension of the canonical Difference-in-Differences (DiD) model to multiple groups and time periods -- in the presence of (time-varying) misclassification of the treatment status as well as of anticipation. We demonstrate that standard estimators are biased with respect to commonly used causal parameters of interest under such forms of misspecification. To address this issue, we provide modified estimators that recover the Average Treatment Effect of observed and true switching units, respectively. Additionally, we suggest a testing procedure to detect the timing and extent of misclassification and anticipation effects. We illustrate the proposed methods with an application to the effects of an anti-cheating policy on school mean test scores in high-stakes national exams in Indonesia.
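The bias the abstract describes can be seen already in the canonical two-period, two-group case. The sketch below is a toy numerical illustration, not the paper's estimator: all numbers (baseline, trend, effect size, anticipation share) are invented for demonstration. If treated units respond in anticipation before their recorded adoption date, part of the effect leaks into the pre-period, and a naive DiD comparison attenuates the estimate.

```python
# Toy illustration (NOT the paper's estimator): anticipation one period
# before the recorded adoption date biases a naive 2x2 DiD estimate.
# All numbers below are hypothetical, chosen only for demonstration.

def did(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences estimate."""
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

tau, trend, base = 5.0, 2.0, 10.0  # true effect, common time trend, baseline

# No anticipation: the pre-period outcome of treated units is untreated,
# so DiD recovers the true effect tau.
clean = did(base, base + trend + tau,
            base, base + trend)

# Anticipation: treated units already realize 40% of tau before the
# recorded adoption date, so the pre-period baseline is contaminated.
antic = 0.4 * tau
biased = did(base + antic, base + trend + tau,
             base, base + trend)

print(clean)   # 5.0 -- equals tau
print(biased)  # 3.0 -- equals tau - antic: the estimate is attenuated
```

The same mechanism generalizes to staggered settings, where mis-recorded or anticipated switching dates contaminate both the "pre" baselines and the not-yet-treated comparison groups.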