The Adoption and Efficacy of Large Language Models: Evidence From Consumer Complaints in the Financial Industry

📅 2023-11-28
📈 Citations: 4
Influential: 0
🤖 AI Summary
This study investigates the real-world adoption and causal impact of large language models (LLMs) in financial consumer complaint resolution. Drawing on over one million authentic complaint records from 2015–2024, the authors combine large-scale text analytics with NLP-based evaluation and construct a novel instrumental variable, exploiting temporal and geographic variation in LLM accessibility, to identify the causal effect of LLM usage (e.g., ChatGPT) on complaint outcomes; the findings are further validated with a randomized controlled trial. Results show that LLM-assisted complaint drafting significantly improves narrative clarity and rhetorical persuasiveness, increasing consumers’ probability of receiving redress from financial institutions by 18.3% on average (p < 0.01). The study's main contribution is combining a quasi-natural experimental design with controlled experimentation, delivering the first robust causal evidence on AI-enabled consumer protection and offering empirically grounded guidance for technology-inclusive policy and regulatory frameworks governing AI’s role in financial consumer rights enforcement.
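The instrumental-variable strategy summarized above can be illustrated with a minimal two-stage least squares (Wald estimator) sketch on synthetic data. This is not the paper's data or code: the variable names (`instrument`, `llm_use`, `relief`), the data-generating process, and the 0.18 "true effect" are all hypothetical, chosen only to show why an instrument can recover a causal effect that naive regression overstates.

```python
# Hypothetical 2SLS/Wald illustration of the IV design described above.
# All variables are simulated; this is a sketch of the technique, not the
# authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# instrument: e.g. an indicator for LLM accessibility (varies over time/region);
# assumed to shift adoption but to affect relief only through adoption.
instrument = rng.integers(0, 2, n).astype(float)
confounder = rng.normal(size=n)  # unobserved complaint quality (endogeneity source)

# LLM adoption depends on both the instrument and the confounder.
llm_use = (0.8 * instrument + 0.5 * confounder
           + rng.normal(size=n) > 0.5).astype(float)

true_effect = 0.18  # hypothetical causal effect of LLM use on relief
relief = true_effect * llm_use + 0.5 * confounder + rng.normal(scale=0.5, size=n)

def wald_iv(y, x, z):
    """2SLS with one endogenous regressor and one instrument reduces to the
    Wald estimator: beta_IV = Cov(z, y) / Cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# Naive OLS slope is biased upward because the confounder drives both
# adoption and relief; the IV estimate is close to the true effect.
naive_ols = np.cov(llm_use, relief)[0, 1] / np.var(llm_use, ddof=1)
beta_iv = wald_iv(relief, llm_use, instrument)
```

The design choice here mirrors the paper's logic: LLM adopters likely differ from non-adopters in unobserved ways, so comparing outcomes directly conflates adoption with complaint quality, whereas variation in access that is plausibly unrelated to any individual complaint isolates the adoption effect.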
📝 Abstract
Large Language Models (LLMs) are reshaping consumer decision-making, particularly in communication with firms, yet our understanding of their impact remains limited. This research explores the effect of LLMs on consumer complaints submitted to the Consumer Financial Protection Bureau from 2015 to 2024, documenting the adoption of LLMs for drafting complaints and evaluating the likelihood of obtaining relief from financial firms. We analyzed over 1 million complaints and identified a significant increase in LLM usage following the release of ChatGPT. We find that LLM usage is associated with an increased likelihood of obtaining relief from financial firms. To investigate this relationship, we employ an instrumental variable approach to mitigate endogeneity concerns around LLM adoption. Although instrumental variables suggest a potential causal link, they cannot fully capture all unobserved heterogeneity. To further establish this causal relationship, we conducted controlled experiments, which support that LLMs can enhance the clarity and persuasiveness of consumer narratives, thereby increasing the likelihood of obtaining relief. Our findings suggest that facilitating access to LLMs can help firms better understand consumer concerns and level the playing field among consumers. This underscores the importance of policies promoting technological accessibility, enabling all consumers to effectively voice their concerns.
Problem

Research questions and friction points this paper addresses.

Impact of LLMs on consumer complaints
LLM usage increases relief likelihood
Enhancing clarity and persuasiveness with LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models analyze complaints
Instrumental variable addresses endogeneity
Controlled experiments test narrative effectiveness