🤖 AI Summary
This paper addresses the conceptual ambiguity and inconsistent standards surrounding quasi-experimental design, focusing on the Difference-in-Differences (DID) estimator as a canonical case. It distinguishes two prevailing definitional approaches in the literature: one based on identifying assumptions being "uncontroversial," and one based on the ability to account for unobserved confounding under assumptions. Through causal inference theory and logical analysis, the study proposes that the **plausibility of core identifying assumptions**—rather than model flexibility or robustness adjustments—constitutes the fundamental criterion for assessing quasi-experimental credibility, yielding a framework that combines philosophical grounding with statistical practice. The analysis clarifies the boundary conditions for valid DID causal inference, underscoring that its inferential strength hinges on the substantive validity of exogeneity assumptions, not merely technical robustness. These findings offer a unified, theoretically grounded benchmark for classifying, evaluating, and critically reflecting upon quasi-experimental designs.
📝 Abstract
Study designs classified as quasi- or natural experiments are typically accorded more face validity than observational study designs more broadly. However, there is ambiguity in the literature about what qualifies as a quasi-experiment. Here, we attempt to resolve this ambiguity by distinguishing two different ways of defining this term. One definition is based on identifying assumptions being uncontroversial, and the other is based on the ability to account for unobserved sources of confounding (under assumptions). We argue that only the former deserves an additional measure of credibility for reasons of design. We use the difference-in-differences approach to illustrate our discussion.
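To make the DID illustration concrete, here is a minimal sketch of the canonical two-group, two-period DID estimator. All numbers and names below are hypothetical; as the paper argues, the causal reading of the resulting number depends entirely on the plausibility of the identifying (parallel-trends) assumption, not on the computation itself.

```python
def did_estimate(y_treat_pre: float, y_treat_post: float,
                 y_ctrl_pre: float, y_ctrl_post: float) -> float:
    """Canonical 2x2 DID: the change in the treated group's mean outcome
    minus the change in the control group's mean outcome. Interpreting
    this as a causal effect requires the parallel-trends assumption."""
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

# Hypothetical group-mean outcomes before and after treatment
effect = did_estimate(y_treat_pre=10.0, y_treat_post=15.0,
                      y_ctrl_pre=9.0, y_ctrl_post=11.0)
print(effect)  # (15 - 10) - (11 - 9) = 3.0
```

The arithmetic is trivial by design: the credibility of the estimate comes from the design-level claim that, absent treatment, both groups' outcomes would have evolved in parallel.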