🤖 AI Summary
Empirical evidence remains scarce regarding how AI-generated credibility signals influence public epistemic judgments about political news. This study addresses that gap through a large-scale mixed-design experiment (N = 1,000), experimentally manipulating AI-provided credibility scores and comparing their effects against institutional authority labels and user engagement metrics (e.g., likes, shares), while integrating behavioral responses and psychometric measures. Results show that AI credibility signals significantly attenuate partisan bias and institutional distrust, outperforming traditional engagement indicators in effect size and operating independently of users' political orientation. Crucially, this persuasive effect stems not from displacing authority but from restructuring the cognitive heuristics underlying judgment formation. These findings reveal AI's capacity to exert *directive influence* over knowledge evaluation, offering empirical support for designing platform-level credibility mechanisms that balance algorithmic guidance with user epistemic autonomy.
📝 Abstract
AI-generated content is rapidly becoming a salient component of online information ecosystems, yet its influence on public trust and epistemic judgments remains poorly understood. We present a large-scale mixed-design experiment (N = 1,000) investigating how AI-generated credibility scores affect users' perceptions of political news. Our results reveal that AI feedback significantly attenuates partisan bias and institutional distrust, surpassing traditional engagement signals such as likes and shares. These findings demonstrate the persuasive power of generative AI and suggest a need for design strategies that balance epistemic influence with user autonomy.