🤖 AI Summary
This study addresses a limitation of traditional conditional treatment effect estimation, which is confined to scalar outcomes and struggles with multivariate, ordinal, or merely rankable preference-based outcomes. The authors propose the Conditional Preference-based Treatment Effect (CPTE) framework, which characterizes heterogeneous treatment effects by modeling ordinal relationships among outcomes and enables policy learning. The framework unifies a range of preference-driven causal estimands for the first time, establishes novel and clearly interpretable identifiability conditions, and combines matching, quantile regression, and distributional regression to construct an efficient influence-function-based estimator that corrects plug-in bias and improves policy value. Experiments on synthetic and semi-synthetic data show that the proposed method significantly outperforms existing approaches, confirming its effectiveness and practical potential.
📝 Abstract
We introduce a new preference-based framework for conditional treatment effect estimation and policy learning, built on the Conditional Preference-based Treatment Effect (CPTE). CPTE requires only that outcomes can be ranked under a preference rule, unlocking flexible modeling of heterogeneous effects with multivariate, ordinal, or preference-driven outcomes. This unifies applications such as the conditional probability of necessity and sufficiency, the conditional Win Ratio, and Generalized Pairwise Comparisons. Despite the intrinsic non-identifiability of comparison-based estimands, CPTE provides interpretable targets and delivers new identifiability conditions for previously unidentifiable estimands. We present estimation strategies via matching, quantile regression, and distributional regression, and further design efficient influence-function estimators that correct plug-in bias and maximize policy value. Synthetic and semi-synthetic experiments demonstrate clear performance gains and practical impact.