Sharp Testable Implications of Encouragement Designs

📅 2024-11-14
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the identification of discrete multi-valued treatments under instrumental variables (IV) within the potential outcomes framework, focusing on the testable core assumption of "monotonicity in discrete settings," i.e., that each instrument value encourages at most one treatment choice. Through a constructive proof, the authors derive sharp, closed-form inequalities that are both necessary and sufficient for this class of restrictions under encouragement designs. The resulting approach not only tests the validity of the restrictions but also pinpoints the alternative treatment pathways responsible for violations. The inequalities are algebraically simple and directly applicable to empirical testing; in an application, they detect violation patterns and identify the specific substitution mechanism at work, substantially enhancing the credibility and interpretability of discrete IV identification.

📝 Abstract
This paper studies a potential outcome model with a continuous or discrete outcome, a discrete multi-valued treatment, and a discrete multi-valued instrument. We derive sharp, closed-form testable implications for a class of restrictions on potential treatments where each value of the instrument encourages towards at most one unique treatment choice; such restrictions serve as the key identifying assumption in several prominent recent empirical papers. Borrowing the terminology used in randomized experiments, we call such a setting an encouragement design. The testable implications are inequalities in terms of the conditional distributions of choices and the outcome given the instrument. Through a novel constructive argument, we show these inequalities are sharp in the sense that any distribution of the observed data that satisfies these inequalities is compatible with this class of restrictions on potential treatments. Based on these inequalities, we propose tests of the restrictions. In an empirical application, we show some of these restrictions are violated and pinpoint the substitution pattern that leads to the violation.
Problem

Research questions and friction points this paper is trying to address.

Testing identifying assumptions in encouragement designs with discrete treatments
Deriving sharp inequalities for potential outcome model validation
Detecting violations of instrument-based treatment restrictions empirically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives sharp testable implications for encouragement designs
Uses inequalities from choice and outcome distributions
Proposes tests for restrictions on potential treatments
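As an illustration of the kind of distributional inequality involved, the sketch below checks the well-known binary special case (Balke-Pearl / Kitagawa-type testable implications of IV validity with monotonicity), which the paper's encouragement-design implications generalize to multi-valued treatments and instruments. This is not the paper's own test; the data-generating process and the noise threshold are invented for the example.

```python
import numpy as np

def binary_iv_inequality_gaps(y, d, z, bins):
    """Empirical check of the classic binary-IV testable inequalities.

    For every outcome set B (here: histogram bins over y), IV validity
    with monotonicity requires
        P(Y in B, D=1 | Z=1) >= P(Y in B, D=1 | Z=0)
        P(Y in B, D=0 | Z=0) >= P(Y in B, D=0 | Z=1)
    Returns the largest gap; a clearly positive value signals a
    violation in the sample.
    """
    y, d, z = map(np.asarray, (y, d, z))
    gaps = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_b = (y >= lo) & (y < hi)
        # Conditional joint probability P(Y in B, D=dd | Z=zz)
        p = lambda dd, zz: np.mean(in_b[z == zz] & (d[z == zz] == dd))
        gaps.append(p(1, 0) - p(1, 1))  # positive => first inequality violated
        gaps.append(p(0, 1) - p(0, 0))  # positive => second inequality violated
    return max(gaps)

# Toy data from a valid binary encouragement design:
# always-takers (u < 0.3) plus compliers encouraged by z = 1.
rng = np.random.default_rng(0)
n = 20_000
z = rng.integers(0, 2, n)
u = rng.uniform(size=n)                               # unobserved type
d = ((u < 0.3) | ((u < 0.7) & (z == 1))).astype(int)  # monotone in z
y = d + rng.normal(size=n)

gap = binary_iv_inequality_gaps(y, d, z, bins=np.linspace(-4.0, 6.0, 11))
print(gap)  # should be at most sampling noise for this valid design
```

In the multi-valued setting studied by the paper, the analogous inequalities compare conditional distributions of (treatment choice, outcome) across instrument values, and the binned gaps above play the same diagnostic role: the sign pattern of violations indicates which substitution between treatments is responsible.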