A Design-based Solution for Causal Inference with Text: Can a Language Model Be Too Large?

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two key challenges in causal inference from text data: (1) difficulty in controlling for confounding variables, and (2) overlap bias arising when large language models (LLMs) inadvertently encode treatment status into their representations. We propose a novel, experiment-design–centric approach that isolates the causal effect of linguistic features—such as humble expression—on audience attitudes or behaviors via structured textual interventions, thereby circumventing representation-level confounding induced by model-based adjustment. Our method integrates randomized controlled trials, bag-of-words baselines, and formal causal identification frameworks to enable unbiased estimation of text treatment effects. Empirically, in political communication, our design robustly identifies the persuasive effect of humble expression. Crucially, we find that current LLM representations underperform traditional shallow representations in causal identification—revealing a “stronger representation, weaker causality” paradox. This work establishes a reproducible, interpretable causal evaluation paradigm for social media interventions and policy communication.

📝 Abstract
Many social science questions ask how linguistic properties causally affect an audience's attitudes and behaviors. Because text properties are often interlinked (e.g., angry reviews use profane language), we must control for possible latent confounding to isolate causal effects. Recent literature proposes adapting large language models (LLMs) to learn latent representations of text that successfully predict both treatment and the outcome. However, because the treatment is a component of the text, these deep learning methods risk learning representations that actually encode the treatment itself, inducing overlap bias. Rather than depending on post-hoc adjustments, we introduce a new experimental design that handles latent confounding, avoids the overlap issue, and unbiasedly estimates treatment effects. We apply this design in an experiment evaluating the persuasiveness of expressing humility in political communication. Methodologically, we demonstrate that LLM-based methods perform worse than even simple bag-of-words models using our real text and outcomes from our experiment. Substantively, we isolate the causal effect of expressing humility on the perceived persuasiveness of political statements, offering new insights on communication effects for social media platforms, policy makers, and social scientists.
Problem

Research questions and friction points this paper is trying to address.

Estimating causal effects of text properties on audience attitudes
Addressing latent confounding in text-based causal inference
Avoiding overlap bias when using language models for inference
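The overlap problem above can be illustrated with a toy simulation (a hypothetical one-dimensional stand-in for a text representation; not from the paper): when a learned feature nearly encodes treatment status, the implied propensity scores collapse toward 0 or 1, violating the positivity/overlap condition that adjustment-based estimators require.

```python
import numpy as np

def true_propensity(x, sigma):
    """P(T=1 | X=x) when X = T + Normal(0, sigma) and P(T=1) = 0.5 (Bayes' rule)."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / sigma**2))

def share_extreme(sigma, n=10_000, seed=0):
    """Fraction of units whose propensity score falls outside [0.01, 0.99]."""
    rng = np.random.default_rng(seed)
    t = rng.integers(0, 2, n)          # randomized binary treatment (e.g., humble framing)
    x = t + rng.normal(0, sigma, n)    # 1-D toy "representation" of the text
    e = true_propensity(x, sigma)
    return np.mean((e > 0.99) | (e < 0.01))

# A weakly predictive (bag-of-words-like) feature keeps propensities interior;
# a representation that nearly encodes T pushes them to 0/1, destroying overlap.
print("shallow feature:", share_extreme(sigma=2.0))   # ≈ 0
print("rich feature:   ", share_extreme(sigma=0.05))  # ≈ 1
```

This is only a sketch of the mechanism: the more faithfully a representation reconstructs the treatment, the less common support remains between treated and control texts.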
Innovation

Methods, ideas, or system contributions that make the work stand out.

Design-based solution avoids overlap bias
Controls latent confounding without post-hoc adjustments
Unbiasedly estimates causal effects of text properties
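Because the design randomizes the linguistic property directly, the treatment effect can be estimated without any representation learning. A minimal sketch of a design-based difference-in-means estimator with a conservative Neyman standard error, on simulated persuasiveness ratings (hypothetical data, not the paper's results):

```python
import numpy as np

def diff_in_means(y, t):
    """Design-based ATE estimate under randomization: treated mean minus control mean."""
    y, t = np.asarray(y, float), np.asarray(t)
    return y[t == 1].mean() - y[t == 0].mean()

def neyman_se(y, t):
    """Neyman (conservative) standard error for a completely randomized design."""
    y, t = np.asarray(y, float), np.asarray(t)
    y1, y0 = y[t == 1], y[t == 0]
    return np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))

# Simulated experiment: 1-7 persuasiveness ratings for statements randomly
# assigned a humble vs. non-humble framing, with a true effect of +0.5.
rng = np.random.default_rng(1)
n = 500
t = rng.integers(0, 2, n)
y = np.clip(np.round(4 + 0.5 * t + rng.normal(0, 1.2, n)), 1, 7)
print(f"estimated effect: {diff_in_means(y, t):.2f} +/- {1.96 * neyman_se(y, t):.2f}")
```

No propensity model or latent-confounder adjustment is needed here: randomization of the text intervention is what licenses the unbiased comparison.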
Graham Tierney
Netflix
Srikar Katta
Department of Computer Science, Duke University
Christopher Bail
Professor of Sociology and Public Policy, Duke University
Artificial Intelligence · Social Media · Political Polarization · Computational Social Science
Sunshine Hillygus
Department of Political Science, Duke University
Alexander Volfovsky
Duke University