🤖 AI Summary
Prior research lacks a computational, large-scale, longitudinal framework for understanding public trust and distrust in generative artificial intelligence (GenAI). Method: This project constructs the first longitudinal computational trust-analysis framework for GenAI, leveraging 197,618 posts from 39 Reddit subreddits (2022–2025) and integrating crowdsourced annotation, natural language processing, and machine learning to identify multidimensional trust/distrust signals and to characterize group-level differences (e.g., experts, AI ethicists, general users). Contribution/Results: We find that trust and distrust are nearly balanced overall; major model releases induce significant attitudinal volatility; and technical performance and usability are the dominant dimensions, while personal experience is the most frequent reason shaping attitudes. The framework provides a scalable methodological foundation and empirical evidence for studying the evolution of societal acceptance of GenAI.
📝 Abstract
The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them becomes essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs) are lacking. This paper presents the first computational study of trust and distrust in GenAI, using a multi-year Reddit dataset (2022–2025) spanning 39 subreddits and 197,618 posts. Crowd-sourced annotations of a representative sample were combined with classification models to scale the analysis. We find that trust and distrust are nearly balanced over time, with shifts around major model releases. Technical performance and usability dominate as trust dimensions, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustor groups (e.g., experts, ethicists, general users). Our results provide a methodological framework for large-scale trust analysis and insights into evolving public perceptions of GenAI.
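The annotate-then-classify scaling step described above can be illustrated with a minimal sketch. The paper does not specify its model or features, so this uses a generic TF-IDF plus logistic regression baseline on hypothetical toy posts and labels; it only shows the general pattern of training on a small crowd-annotated sample and applying the classifier to the remaining corpus.

```python
# Hypothetical sketch of scaling crowd-sourced trust/distrust annotations
# to a large post corpus. Posts, labels, and model choice are illustrative
# assumptions, not the paper's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for crowd-annotated Reddit posts.
annotated_posts = [
    ("The new model answers my coding questions reliably.", "trust"),
    ("It hallucinated citations again, I cannot rely on it.", "distrust"),
    ("Great usability, it saved me hours this week.", "trust"),
    ("These systems fabricate facts with total confidence.", "distrust"),
]
texts, labels = zip(*annotated_posts)

# Train a simple text classifier on the annotated sample.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Apply it to unlabeled posts to scale the analysis to the full corpus.
unlabeled = ["It keeps making things up, totally unreliable."]
predictions = clf.predict(unlabeled)
```

In practice the annotated sample would be far larger and the classifier evaluated against held-out annotations before labeling the full 197,618-post corpus.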