Orthogonal Representation Learning for Estimating Causal Quantities

📅 2025-02-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing causal representation learning methods allow for end-to-end estimation but lack the favorable theoretical properties of Neyman-orthogonal learners, such as double robustness and quasi-oracle efficiency; moreover, their reliance on explicit balancing constraints can even lead to inconsistent estimation. Method: The paper proposes OR-learners, a class of Neyman-orthogonal learners for causal quantities (e.g., the CATE) defined at the representation level, enabling consistent, doubly robust, and quasi-oracle-efficient estimation based on any learned representation, without requiring explicit balancing constraints. Contribution/Results: To the authors' knowledge, this is the first unified framework of representation learning methods and Neyman-orthogonal learners. In multiple experiments, OR-learners improve on existing representation learning methods and, under certain regularity conditions, achieve state-of-the-art performance.

📝 Abstract
Representation learning is widely used for estimating causal quantities (e.g., the conditional average treatment effect) from observational data. While existing representation learning methods have the benefit of allowing for end-to-end learning, they lack the favorable theoretical properties of Neyman-orthogonal learners, such as double robustness and quasi-oracle efficiency. Moreover, such methods often employ additional constraints, like balancing, which may even lead to inconsistent estimation. In this paper, we propose a novel class of Neyman-orthogonal learners for causal quantities defined at the representation level, which we call OR-learners. Our OR-learners have several practical advantages: they allow for consistent estimation of causal quantities based on any learned representation, while offering favorable theoretical properties, including double robustness and quasi-oracle efficiency. In multiple experiments, we show that, under certain regularity conditions, our OR-learners improve existing representation learning methods and achieve state-of-the-art performance. To the best of our knowledge, ours is the first work to offer a unified framework combining representation learning methods and Neyman-orthogonal learners for the estimation of causal quantities.
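The abstract describes two-stage Neyman-orthogonal learners applied on top of a learned representation. The following is a minimal sketch of that idea, not the paper's implementation: it uses the standard doubly robust (DR-learner) pseudo-outcome, one common instance of a Neyman-orthogonal learner, and a fixed linear feature map `phi` as a stand-in for a learned encoder. The simulated data and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data: covariates X, binary treatment A, outcome Y.
n = 2000
X = rng.normal(size=(n, 2))
propensity = 1 / (1 + np.exp(-X[:, 0]))   # treatment assignment depends on X
A = rng.binomial(1, propensity)
tau_true = 1.0 + X[:, 1]                  # heterogeneous treatment effect
Y = X[:, 0] + A * tau_true + rng.normal(scale=0.5, size=n)

def phi(X):
    """Fixed feature map standing in for a learned representation (assumption)."""
    return np.c_[np.ones(len(X)), X]

def fit_linear(Z, t):
    """Least-squares fit; returns a prediction function."""
    w, *_ = np.linalg.lstsq(Z, t, rcond=None)
    return lambda Znew: Znew @ w

Z = phi(X)

# Stage 1: nuisance models fit on the representation.
mu0 = fit_linear(Z[A == 0], Y[A == 0])    # outcome model for A = 0
mu1 = fit_linear(Z[A == 1], Y[A == 1])    # outcome model for A = 1
# Crude clipped linear-probability propensity model (illustrative only).
pi_hat = np.clip(fit_linear(Z, A.astype(float))(Z), 0.05, 0.95)

# Stage 2: doubly robust (Neyman-orthogonal) pseudo-outcomes, then regress
# them on the representation to estimate the CATE.
mu_a = np.where(A == 1, mu1(Z), mu0(Z))
pseudo = (mu1(Z) - mu0(Z)
          + (A - pi_hat) / (pi_hat * (1 - pi_hat)) * (Y - mu_a))
tau_hat = fit_linear(Z, pseudo)(Z)

print(float(np.mean(np.abs(tau_hat - tau_true))))  # mean absolute CATE error
```

Because the pseudo-outcome is orthogonal in the nuisances, the CATE estimate remains consistent here even though the linear propensity model is misspecified, as long as the outcome models are adequate, which is the double-robustness property the summary refers to.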
Problem

Research questions and friction points this paper is trying to address.

Estimating causal quantities (e.g., the CATE) from observational data
Equipping representation learning with the favorable properties of Neyman-orthogonal learners
Providing a consistent and efficient framework for causal estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neyman-orthogonal learners
Double robustness
Quasi-oracle efficiency