🤖 AI Summary
Recommender systems in social networks often exacerbate misinformation propagation by over-prioritizing user engagement. Method: This paper proposes a closed-loop control framework that jointly optimizes information veracity and user engagement. It extends the Friedkin-Johnsen opinion-dynamics model with dynamic penalty mechanisms targeting features commonly exploited by misinformation, such as extreme negative sentiment and content novelty, and integrates both model-free and model-based control strategies, using large language models to extract sentiment features. Evaluation is conducted via simulation on the LIAR2 dataset. Contribution/Results: The paper first demonstrates that suppressing misinformation in networks containing radical users can simultaneously increase median-user engagement, indicating that content governance enhances discussion quality among non-radical users. Experiments show up to a 76% reduction in misinformation spread; in certain scenarios, user engagement rises rather than declines, achieving a synergistic balance between regulatory efficacy and platform vitality.
📝 Abstract
Modern social networks rely on recommender systems that inadvertently amplify misinformation by prioritizing engagement over content veracity. We present a control framework that mitigates misinformation spread while maintaining user engagement by penalizing content characteristics commonly exploited by false information, namely extreme negative sentiment and novelty. We extend the closed-loop Friedkin-Johnsen model to jointly address misinformation mitigation and user-engagement maximization. Both model-free and model-based control strategies achieve up to a 76% reduction in misinformation propagation across diverse network configurations, validated through simulations on the LIAR2 dataset with sentiment features extracted via large language models. Analysis of engagement-misinformation trade-offs reveals that in networks with radical users, median engagement improves even as misinformation decreases, suggesting that content moderation enhances discourse quality for non-extremist users. The framework provides practical guidance for platform operators in balancing misinformation suppression with engagement objectives.
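For readers unfamiliar with the underlying dynamics, the classical Friedkin-Johnsen update is x' = λ·W·x + (1−λ)·u, where W is a row-stochastic influence matrix, u the users' innate opinions, and λ their susceptibility to social influence. The sketch below is a minimal illustration of this baseline model only; the `penalty` factor is a hypothetical stand-in for the paper's dynamic penalty mechanism, not its actual controller.

```python
import numpy as np

def fj_step(x, u, W, lam, penalty=0.0):
    """One Friedkin-Johnsen update: x' = lam * W @ x + (1 - lam) * u.
    `penalty` (illustrative only) uniformly damps expressed opinions,
    mimicking a content-level suppression signal."""
    x_new = lam * (W @ x) + (1.0 - lam) * u
    return (1.0 - penalty) * x_new

# Toy 3-user network with a row-stochastic influence matrix.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
u = np.array([1.0, -0.5, 0.2])  # innate opinions
lam = 0.8

x = u.copy()
for _ in range(200):            # iterate to (near) steady state
    x = fj_step(x, u, W, lam)

# The fixed point solves (I - lam * W) x* = (1 - lam) u.
x_star = np.linalg.solve(np.eye(3) - lam * W, (1.0 - lam) * u)
print(np.allclose(x, x_star, atol=1e-8))
```

Because W is row-stochastic and λ < 1, the spectral radius of λW is below 1, so the iteration contracts to the unique fixed point regardless of the starting opinions.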