Toxicity in State Sponsored Information Operations

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how state-sponsored information operations (IOs) strategically deploy toxic language on social media—examining patterns, mechanisms, and effects. Analyzing 56 million tweets from 18 geopolitical entities on X/Twitter, we conduct the first large-scale, cross-lingual quantitative analysis of six toxicity categories using the Perspective API and text mining techniques. Results show that toxic content constitutes only 1.53% of all posts yet generates significantly higher user engagement; notably, Russian-associated IOs exhibit markedly elevated interaction rates for toxic posts, underscoring deliberate strategic deployment. The findings reveal a “low-frequency, high-impact” propagation logic for toxicity in IOs—where infrequent but carefully crafted toxic messages amplify reach and resonance. This work fills a critical gap in the literature by empirically characterizing affective and rhetorical strategies in state-funded information manipulation. All code is publicly available.

📝 Abstract
State-sponsored information operations (IOs) increasingly influence global discourse on social media platforms, yet their emotional and rhetorical strategies remain inadequately characterized in the scientific literature. This study presents the first comprehensive analysis of toxic language deployment within such campaigns, examining 56 million posts from over 42 thousand accounts linked to 18 distinct geopolitical entities on X/Twitter. Using Google's Perspective API, we systematically detect and quantify six categories of toxic content and analyze their distribution across national origins, linguistic structures, and engagement metrics, providing essential insight into the underlying patterns of such operations. Our findings reveal that although toxic content constitutes only 1.53% of all posts, it is associated with disproportionately high engagement and appears to be strategically deployed in specific geopolitical contexts. Notably, toxic content originating from Russian influence operations receives significantly higher user engagement than influence operations from any other country in our dataset. Our code is available at https://github.com/shafin191/Toxic_IO.
Problem

Research questions and friction points this paper is trying to address.

Analyzing toxic language in state-sponsored social media operations
Quantifying toxic content distribution across geopolitical entities
Investigating engagement patterns of toxic posts in IO campaigns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes toxic language in state-sponsored information operations
Uses Google's Perspective API for toxicity detection
Examines 56 million posts from 42k accounts
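The toxicity-detection step above relies on Google's Perspective API. As a minimal illustrative sketch (not the paper's actual pipeline; the attribute list is assumed to correspond to the six categories studied, and the endpoint usage follows the public API docs), scoring a single post could look like this:

```python
import json
import urllib.request

# Six standard Perspective API attributes, assumed here to match the
# paper's six toxicity categories (check the authors' repo for the
# exact configuration they used).
ATTRIBUTES = [
    "TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
    "INSULT", "PROFANITY", "THREAT",
]

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_request(text: str) -> dict:
    """Build the JSON body for a Perspective API comments:analyze call."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
        "doNotStore": True,  # do not retain the analyzed text server-side
    }


def score_text(text: str, api_key: str) -> dict:
    """Return {attribute: summary score in [0, 1]} for one post.

    Makes a live network call; requires a valid Perspective API key.
    """
    body = json.dumps(build_request(text)).encode("utf-8")
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return {
        attr: result["attributeScores"][attr]["summaryScore"]["value"]
        for attr in ATTRIBUTES
    }
```

A common convention in the literature is to label a post toxic when one or more attribute scores exceed a fixed threshold (often 0.5); whether and where the paper sets such a threshold is specified in the authors' repository, not assumed here.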