🤖 AI Summary
This paper addresses two key challenges in causal inference from text data: (1) the difficulty of controlling for latent confounding among interlinked text properties, and (2) the overlap bias that arises when large language models (LLMs) inadvertently encode treatment status into their representations. The authors propose an experimental-design-centric approach that isolates the causal effect of a linguistic feature, such as expressing humility, on audience attitudes and behaviors through structured textual interventions, thereby avoiding the overlap bias induced by representation-based adjustment. The method integrates a randomized experiment, bag-of-words baselines, and formal causal identification to enable unbiased estimation of text treatment effects. Empirically, in political communication, the design identifies the persuasive effect of expressing humility. Crucially, the authors find that current LLM representations underperform traditional shallow representations in causal identification, revealing a “stronger representation, weaker causality” paradox. The work establishes a reproducible, interpretable causal evaluation paradigm for social media interventions and policy communication.
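The overlap problem described above can be seen in a toy simulation. The sketch below is an illustration of the concept only (not the paper's code or data; all variable names and numbers are made up): it fits a propensity model on two feature sets for the same synthetic texts, one capturing only the latent confounder and one that also leaks the treatment indicator, as an over-expressive representation might. When the representation encodes treatment, the estimated propensities crowd toward 0 and 1 and the inverse-propensity-weighted estimate drifts away from the true effect.

```python
# Toy illustration of overlap bias from treatment-leaking representations.
# Purely synthetic; not the paper's data, model, or estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

confounder = rng.normal(size=n)                         # latent "style" confounder
p_treat = 1.0 / (1.0 + np.exp(-confounder))             # confounded assignment
treatment = rng.binomial(1, p_treat)                    # 1 = humble wording
outcome = 0.5 * treatment + 2.0 * confounder + rng.normal(size=n)  # true effect = 0.5

# Shallow adjustment set: a proxy that captures only the confounder.
shallow = confounder.reshape(-1, 1)
# "Deep" representation: also leaks the treatment indicator into the features.
deep = np.column_stack([confounder, treatment + rng.normal(scale=0.05, size=n)])

def ipw_ate(features, t, y):
    """Inverse-propensity-weighted ATE with propensities fit on `features`."""
    ps = LogisticRegression().fit(features, t).predict_proba(features)[:, 1]
    ps = np.clip(ps, 1e-3, 1 - 1e-3)
    return np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps)), ps

for name, feats in [("shallow (confounder only)", shallow),
                    ("deep (leaks treatment)   ", deep)]:
    ate, ps = ipw_ate(feats, treatment, outcome)
    print(f"{name}: ATE estimate {ate:5.2f}, propensity range "
          f"[{ps.min():.3f}, {ps.max():.3f}]  (true effect 0.50)")
```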
📝 Abstract
Many social science questions ask how linguistic properties causally affect an audience's attitudes and behaviors. Because text properties are often interlinked (e.g., angry reviews use profane language), we must control for possible latent confounding to isolate causal effects. Recent literature proposes adapting large language models (LLMs) to learn latent representations of text that successfully predict both the treatment and the outcome. However, because the treatment is a component of the text, these deep learning methods risk learning representations that actually encode the treatment itself, inducing overlap bias. Rather than depending on post-hoc adjustments, we introduce a new experimental design that handles latent confounding, avoids the overlap issue, and unbiasedly estimates treatment effects. We apply this design in an experiment evaluating the persuasiveness of expressing humility in political communication. Methodologically, we demonstrate that LLM-based methods perform worse than even simple bag-of-words models on the real text and outcomes from our experiment. Substantively, we isolate the causal effect of expressing humility on the perceived persuasiveness of political statements, offering new insights on communication effects for social media platforms, policy makers, and social scientists.
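For contrast, the logic of the design is that once the humility manipulation is randomized across texts, the effect on perceived persuasiveness can be estimated by a simple difference in means with a Neyman-style standard error, with no representation-based adjustment and hence no overlap problem. The sketch below is a minimal, hypothetical illustration with simulated ratings, not the authors' analysis code.

```python
# Minimal sketch of effect estimation under randomized text manipulation.
# Ratings and sample sizes below are simulated for illustration only.
import numpy as np

def randomized_ate(ratings_treated, ratings_control):
    """Difference-in-means estimate and Neyman standard error
    under random assignment of the text manipulation."""
    yt = np.asarray(ratings_treated, dtype=float)
    yc = np.asarray(ratings_control, dtype=float)
    ate = yt.mean() - yc.mean()
    se = np.sqrt(yt.var(ddof=1) / len(yt) + yc.var(ddof=1) / len(yc))
    return ate, se

# Simulated persuasiveness ratings for humble vs. plain versions of statements.
rng = np.random.default_rng(1)
humble_ratings = rng.normal(loc=5.4, scale=1.2, size=300)
plain_ratings = rng.normal(loc=5.0, scale=1.2, size=300)

ate, se = randomized_ate(humble_ratings, plain_ratings)
print(f"estimated effect of expressing humility: {ate:.2f} (95% CI ±{1.96 * se:.2f})")
```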