In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions

📅 2025-10-17
🤖 AI Summary
Prior research lacks a computational, large-scale, longitudinal framework for understanding public trust and distrust in generative artificial intelligence (GenAI). Method: This project constructs the first longitudinal computational trust analysis framework for GenAI, leveraging 197,618 posts from 39 Reddit subreddits (2022–2025) and integrating crowdsourced annotation, natural language processing, and machine learning to identify multidimensional trust/distrust signals and characterize group-level differences (e.g., experts, AI ethicists, general users). Contribution/Results: We find that trust and distrust remain roughly balanced over time; major model releases induce significant attitudinal shifts; and technical performance, usability, and, most frequently, personal experience are the key drivers. The framework provides a scalable methodological foundation and empirical evidence for studying the evolution of societal acceptance of GenAI.

📝 Abstract
The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them also becomes essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but there is a lack of computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs). This paper presents the first computational study of Trust and Distrust in GenAI, using a multi-year Reddit dataset (2022–2025) spanning 39 subreddits and 197,618 posts. Crowd-sourced annotations of a representative sample were combined with classification models to scale analysis. We find that Trust and Distrust are nearly balanced over time, with shifts around major model releases. Technical performance and usability dominate as dimensions, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustors (e.g., experts, ethicists, general users). Our results provide a methodological framework for large-scale Trust analysis and insights into evolving public perceptions of GenAI.
Problem

Research questions and friction points this paper is trying to address.

Measuring trust and distrust in generative AI using computational methods
Analyzing public perceptions through large-scale Reddit discussions
Identifying key factors influencing attitudes toward AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used multi-year Reddit dataset for analysis
Combined crowd-sourced annotations with classification models
Developed methodological framework for trust analysis
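The second innovation, scaling a small set of crowd-sourced annotations to the full corpus with a classifier, can be sketched in a few lines. This is a minimal illustration using scikit-learn, not the authors' actual pipeline; the example posts and labels are hypothetical.

```python
# Minimal sketch (not the paper's implementation): train a classifier on a
# small crowd-annotated sample, then label the remaining posts at scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated sample: (post text, label in {"trust", "distrust"}).
annotated = [
    ("It helped me debug my code in minutes, genuinely impressive.", "trust"),
    ("The model confidently invented citations; I can't rely on it.", "distrust"),
    ("The new release feels much more capable and easy to use.", "trust"),
    ("It keeps producing plausible but wrong answers, so I stopped.", "distrust"),
]
texts, labels = zip(*annotated)

# TF-IDF features + logistic regression stand in for whatever models
# the authors actually used.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Apply the trained model to unannotated posts.
unlabeled = ["The update writes convincing answers that turn out to be wrong."]
preds = clf.predict(unlabeled)
print(preds)
```

In practice the annotated sample would be far larger and representative of the corpus, and the classifier would be validated against held-out annotations before labeling all 197,618 posts.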
Aria Pessianzadeh
Drexel University
Computational social science · NLP · Social media analysis
Naima Sultana
Drexel University
Hildegarde Van den Bulck
Drexel University
David Gefen
Professor of MIS, Drexel University
Trust · Research Methods · Ecommerce · IT Outsourcing · Sociolinguistics
Shahin Jabari
Drexel University
Rezvaneh Rezapour
Drexel University