🤖 AI Summary
This study examines the interplay between coordinated inauthentic behavior on social media and information integrity during the early phase of the 2023 Israel–Hamas war. Analyzing 4.5 million posts from X (formerly Twitter) with a multimodal framework spanning textual, visual, diffusion-pattern, and sentiment analyses, the authors identify 11 coordinated networks comprising 541 accounts. The coordination relies predominantly on low-complexity tactics such as retweet amplification and copy-paste diffusion, and widely amplified misleading claims concentrate in just three of the eleven groups. Notably, claim veracity, toxicity, and emotional tone are mutually uncorrelated, so no single behavioral signal reliably proxies the others, underscoring the need to model coordination structures jointly with content characteristics when designing interventions. The analysis further indicates that suspending the most prolific spreaders of misleading content would substantially curtail its diffusion, whereas restricting high-amplification accounts in general would have limited effect.
📝 Abstract
Coordinated campaigns on social media play a critical role in shaping crisis information environments, particularly during the onset of conflicts when uncertainty is high and verified information is scarce. We study the interplay between coordinated campaigns and information integrity through a case study of the 2023 Israel-Hamas War on Twitter (X). We analyze 4.5 million tweets and employ established coordination detection methods to identify 11 coordinated groups involving 541 accounts. We characterize these groups through a multimodal analysis that includes topics, account amplification, toxicity, emotional tone, visual themes, and misleading claims. Our analysis reveals that coordinated campaigns rely predominantly on low-complexity tactics, such as retweet amplification and copy-paste diffusion, and promote distinct narratives consistent with a fragmented manipulation landscape without centralized control. Widely amplified misleading claims concentrate within just three of the identified coordinated groups; the remaining groups primarily engage in advocacy, religious solidarity, or humanitarian mobilization. Claim-level integrity, toxicity, and emotional signals are mutually uncorrelated: no single behavioral signal is a reliable proxy for the others. Targeting the most prolific spreaders of misleading content for moderation would effectively reduce such content, whereas targeting prolific amplifiers in general would not achieve the same mitigation effect. These findings suggest that coordination structures must be evaluated jointly with their specific content footprints to effectively prioritize moderation interventions.
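The abstract does not name the specific detection methods used. As a rough illustration of how established coordination detection methods of this kind commonly work, the minimal Python sketch below links accounts that retweet the same tweets within a short time window and reads off coordinated groups as connected components of the resulting network. The record format, the 60-second window, and the edge threshold are illustrative assumptions, not the authors' actual settings.

```python
# Hypothetical sketch: co-retweet coordination detection.
# The record format, window size, and threshold below are illustrative
# assumptions, not the paper's actual settings.
from collections import defaultdict
from itertools import combinations

import networkx as nx

WINDOW_SECONDS = 60   # max gap for two retweets of the same tweet to count as a co-action
MIN_CO_ACTIONS = 5    # min co-actions before an account pair is treated as coordinated

def build_coordination_graph(retweets):
    """retweets: iterable of (account, retweeted_tweet_id, unix_timestamp)."""
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in retweets:
        by_tweet[tweet_id].append((ts, account))

    pair_weights = defaultdict(int)
    for actions in by_tweet.values():
        actions.sort()  # order by timestamp so t2 >= t1 below
        for (t1, a1), (t2, a2) in combinations(actions, 2):
            if a1 != a2 and t2 - t1 <= WINDOW_SECONDS:
                pair_weights[frozenset((a1, a2))] += 1

    graph = nx.Graph()
    for pair, weight in pair_weights.items():
        if weight >= MIN_CO_ACTIONS:  # drop plausibly coincidental pairs
            a, b = tuple(pair)
            graph.add_edge(a, b, weight=weight)
    return graph

# Toy data: two accounts retweet the same five tweets seconds apart.
records = [("acctA", f"tw{i}", 10 * i) for i in range(5)]
records += [("acctB", f"tw{i}", 10 * i + 5) for i in range(5)]
graph = build_coordination_graph(records)
print(list(nx.connected_components(graph)))  # [{'acctA', 'acctB'}]
```

The edge-weight threshold is the key design choice in approaches like this: a single near-simultaneous co-retweet is common by chance, so only repeated co-action is treated as evidence of coordination. Consistent with the abstract's findings, detected groups would then be characterized separately by their content footprints (topics, toxicity, misleading claims), since no single behavioral signal proxies the others.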