Personalized Large Language Models Can Increase the Belief Accuracy of Social Networks

📅 2025-06-06
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
This study investigates the causal impact of personalized large language models (LLMs) on belief accuracy and on the evolution of network structure in a contentious domain: the 2024 U.S. presidential election. Employing a pre-registered online experiment (N = 1,265), we integrate user-profile-driven prompt engineering, fact-aligned retrieval-augmented generation (RAG), and dynamic social network modeling. Results demonstrate that LLMs not only significantly improve individual belief accuracy but also function as “corrective agents,” prompting users to actively reconfigure their attention networks: 87% of participants chose to follow the personalized LLM, and connection density among high-accuracy users increased by 42%. The study establishes a dual mechanism, LLM-driven belief convergence combined with self-organized network optimization, thereby offering a novel paradigm for fostering resilient digital information ecosystems.
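
The paper does not publish its code, but the pipeline named above (user-profile-driven prompting plus a retrieval guardrail) can be sketched. Everything below is illustrative: `UserProfile`, `retrieve_facts`, `build_prompt`, and the fact store are hypothetical names and data, not the authors' implementation.

```python
# A minimal sketch, assuming a vetted fact store keyed by topic.
# All names and data here are invented for illustration.
from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    party_id: str            # e.g. "Democrat", "Republican", "Independent"
    news_sources: list[str]

def retrieve_facts(question: str, fact_store: dict[str, str]) -> list[str]:
    """Stand-in for a retrieval guardrail: return only vetted fact
    snippets whose topic keys appear in the question."""
    return [fact for key, fact in fact_store.items() if key in question.lower()]

def build_prompt(profile: UserProfile, question: str,
                 fact_store: dict[str, str]) -> str:
    """Combine a persona derived from the user's profile with a
    fact-grounding instruction, then append the question."""
    facts = retrieve_facts(question, fact_store)
    persona = (
        f"The reader is a {profile.age}-year-old {profile.party_id} "
        f"who follows {', '.join(profile.news_sources)}."
    )
    guardrail = (
        "Answer using ONLY the verified facts below. "
        "If they do not cover the question, say you are unsure.\n"
        + "\n".join(f"- {f}" for f in facts)
    )
    return f"{persona}\n\n{guardrail}\n\nQuestion: {question}"

fact_store = {"turnout": "Verified: 2020 US turnout was about 66% of eligible voters."}
profile = UserProfile(age=34, party_id="Independent", news_sources=["local TV news"])
print(build_prompt(profile, "What was turnout in 2020?", fact_store))
```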

📝 Abstract
Large language models (LLMs) are increasingly involved in shaping public understanding on contested issues. This has led to substantial discussion about the potential of LLMs to reinforce or correct misperceptions. While existing literature documents the impact of LLMs on individuals' beliefs, limited work explores how LLMs affect social networks. We address this gap with a pre-registered experiment (N = 1,265) around the 2024 US presidential election, where we empirically explore the impact of personalized LLMs on belief accuracy in the context of social networks. The LLMs are constructed to be personalized, offering messages tailored to individuals' profiles, and to have guardrails for accurate information retrieval. We find that the presence of a personalized LLM leads individuals to update their beliefs towards the truth. More importantly, individuals with a personalized LLM in their social network not only choose to follow it, indicating they would like to obtain information from it in subsequent interactions, but also construct subsequent social networks to include other individuals with beliefs similar to the LLM's (in this case, more accurate beliefs). Therefore, our results show that LLMs have the capacity to influence individual beliefs and the social networks in which people exist, and highlight the potential of LLMs to act as corrective agents in online environments. Our findings can inform future strategies for responsible AI-mediated communication.
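
As a concrete reading of "update their beliefs towards the truth": belief accuracy can be operationalized as the absolute error between a stated belief and a factual benchmark, compared before and after exposure. The numbers below are made up for illustration; the paper's actual measures may differ.

```python
# Hypothetical sketch of the outcome measure: mean absolute error from a
# factual benchmark, pre vs. post exposure. All values are invented.
truth = 66.0                       # factual benchmark (e.g. % turnout)
pre_beliefs  = [50.0, 80.0, 62.0]  # stated beliefs before interacting with the LLM
post_beliefs = [60.0, 70.0, 65.0]  # stated beliefs after

def mean_abs_error(beliefs: list[float], truth: float) -> float:
    return sum(abs(b - truth) for b in beliefs) / len(beliefs)

improvement = mean_abs_error(pre_beliefs, truth) - mean_abs_error(post_beliefs, truth)
print(f"Mean absolute error dropped by {improvement:.1f} points")
# A positive value means beliefs moved toward the truth on average.
```
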
Problem

Research questions and friction points this paper is trying to address.

Impact of personalized LLMs on social network belief accuracy
LLMs' role in correcting misperceptions in online environments
Influence of LLMs on individual beliefs and network formation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized LLMs tailored to individual profiles
Guardrails for accurate information retrieval
Evidence that LLMs reshape both individual beliefs and subsequent network formation (see the density sketch below)
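
The "connection density among high-accuracy users" figure from the summary corresponds to a standard graph metric: the density of the subgraph induced by high-accuracy participants in the follow network. A minimal sketch with networkx, using an invented edge list, invented accuracy scores, and an invented threshold:

```python
# Hypothetical sketch of the network metric; data and threshold are invented.
import networkx as nx

follows = [(1, 2), (2, 3), (3, 1), (1, 4), (4, 5)]    # who-follows-whom edges
accuracy = {1: 0.9, 2: 0.8, 3: 0.85, 4: 0.3, 5: 0.4}  # per-user belief accuracy

G = nx.DiGraph(follows)
high = [u for u, a in accuracy.items() if a >= 0.7]   # "high-accuracy" users
density_high = nx.density(G.subgraph(high))           # edges present / edges possible

print(f"Density among high-accuracy users: {density_high:.2f}")
# Comparing this value across experimental waves would show whether
# high-accuracy users became more densely connected over time.
```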