🤖 AI Summary
This paper addresses the challenge of biased average treatment effect (ATE) estimation in causal inference caused by unstructured confounders, such as images and text, that are difficult to adjust for. The authors propose a double machine learning (DML) framework that leverages latent representations from pretrained neural networks. Theoretically, they establish for the first time that high-dimensional latent features extracted by pretrained models satisfy the double robustness condition, enabling valid statistical inference without sparsity or additive structural assumptions; they further characterize the intrinsic robustness of these features to representation non-identifiability and high dimensionality. Their method adapts to the latent space's inherent sparse structure, ensuring fast convergence. Empirical results demonstrate substantial improvements in ATE estimation accuracy and statistical validity under non-tabular confounding. The approach establishes a verifiable and generalizable paradigm for multimodal causal analysis.
📝 Abstract
There is growing interest in extending average treatment effect (ATE) estimation to incorporate non-tabular data, such as images and text, which may act as sources of confounding. Neglecting these effects risks biased results and flawed scientific conclusions. However, incorporating non-tabular data necessitates sophisticated feature extractors, often combined with transfer learning. In this work, we investigate how latent features from pre-trained neural networks can be leveraged to adjust for sources of confounding. We formalize conditions under which these latent features enable valid adjustment and statistical inference in ATE estimation, demonstrating our results with double machine learning as a running example. We discuss critical challenges inherent to latent feature learning and downstream parameter estimation arising from the high dimensionality and non-identifiability of representations. Common structural assumptions for obtaining fast convergence rates with additive or sparse linear models are shown to be unrealistic for latent features. We argue, however, that neural networks are largely insensitive to these issues. In particular, we show that neural networks can achieve fast convergence rates by adapting to intrinsic notions of sparsity and dimension of the learning problem.
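The DML pipeline sketched in the abstract, extract latent features, fit nuisance models on them, and combine them in an orthogonal score with cross-fitting, can be illustrated on simulated data. The sketch below is not the paper's implementation: the Gaussian matrix `Z` stands in for features from a pretrained network, the true ATE `tau` and the `LogisticRegression`/`Ridge` nuisance learners are illustrative assumptions, and the estimator is the standard cross-fitted AIPW form of DML.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, d = 2000, 20
# Z is a stand-in for latent features extracted by a pretrained network.
Z = rng.normal(size=(n, d))
g = Z[:, 0] - 0.5 * Z[:, 1]                      # confounding signal in the features
T = (rng.random(n) < 1.0 / (1.0 + np.exp(-g))).astype(float)
tau = 1.0                                        # true ATE (assumed for the simulation)
Y = tau * T + g + rng.normal(size=n)

# Cross-fitted AIPW (orthogonal) score: nuisances are trained on one fold
# and evaluated on the held-out fold, then the scores are averaged.
psi = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(Z):
    prop = LogisticRegression(max_iter=1000).fit(Z[train], T[train])
    e = np.clip(prop.predict_proba(Z[test])[:, 1], 0.01, 0.99)     # propensity score
    mu1 = Ridge().fit(Z[train][T[train] == 1], Y[train][T[train] == 1]).predict(Z[test])
    mu0 = Ridge().fit(Z[train][T[train] == 0], Y[train][T[train] == 0]).predict(Z[test])
    t, y = T[test], Y[test]
    psi[test] = mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e)

ate_hat = psi.mean()                              # point estimate of the ATE
se = psi.std(ddof=1) / np.sqrt(n)                 # plug-in standard error
```

Because the score is Neyman-orthogonal, moderate errors in the propensity and outcome models enter the ATE estimate only through their product, which is the double-robustness property the abstract refers to.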