🤖 AI Summary
This study investigates how opaque, asymmetric visibility regulation in social media recommendation algorithms, such as "shadow banning," systematically interferes with public information dissemination. Employing a large-scale empirical analysis of over 40 million tweets from more than 9 million users, covering the Ukraine-Russia conflict and the 2024 US Presidential Elections, the study combines view-count modeling, cross-source visibility comparison, narrative frame coding, and statistical testing. Results show that tweets containing external links suffer systematic visibility suppression, by up to a factor of eight, regardless of ideological stance or source reliability. Crucially, visibility disparities are conditioned on source identity (e.g., Kyiv Independent vs. RT.com; Donald Trump vs. Kamala Harris) rather than on content stance or credibility, directly contradicting the assumption of algorithmic neutrality. These findings provide empirically grounded evidence challenging platform claims of impartial curation and establish a methodological foundation for transparency-oriented platform governance.
📝 Abstract
In recent years, the opaque design and the limited public understanding of social networks' recommendation algorithms have raised concerns about potential manipulation of information exposure. While reducing content visibility, also known as "shadow banning," may help limit harmful content, it can also be used to suppress dissenting voices. This underscores the need for greater transparency and a better understanding of the practice. In this paper, we investigate the presence of visibility alterations through a large-scale quantitative analysis of two Twitter/X datasets comprising over 40 million tweets from more than 9 million users, focused on discussions surrounding the Ukraine-Russia conflict and the 2024 US Presidential Elections. We use view counts to detect patterns of reduced or inflated visibility and examine how these correlate with user opinions, social roles, and narrative framings. Our analysis shows that the algorithm systematically penalizes tweets containing links to external resources, reducing their visibility by up to a factor of eight, regardless of the ideological stance or source reliability. Rather, content visibility may be penalized or favored depending on the specific accounts producing it, as observed when comparing tweets from the Kyiv Independent and RT.com or tweets by Donald Trump and Kamala Harris. Overall, our work highlights the importance of transparency in content moderation and recommendation systems in protecting the integrity of public discourse and ensuring equitable access to online platforms.
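For concreteness, below is a minimal sketch (not the authors' released code) of the kind of view-count comparison the abstract describes: contrasting the visibility of tweets that carry external links against those that do not. The CSV path and column names (`views`, `has_link`) are hypothetical placeholders.

```python
# Sketch of a link vs. no-link visibility comparison, assuming a
# hypothetical per-tweet dump with a numeric "views" column and a
# boolean "has_link" column.
import pandas as pd
from scipy.stats import mannwhitneyu

tweets = pd.read_csv("tweets.csv")  # hypothetical dataset path

with_link = tweets.loc[tweets["has_link"], "views"]
without_link = tweets.loc[~tweets["has_link"], "views"]

# Ratio of median view counts: a value near 8 for no-link vs. link
# tweets would match the suppression factor reported in the paper.
ratio = without_link.median() / with_link.median()

# Non-parametric test of whether link tweets get fewer views,
# since tweet view counts are heavy-tailed.
stat, p = mannwhitneyu(with_link, without_link, alternative="less")

print(f"median view-count ratio (no-link / link): {ratio:.2f}")
print(f"Mann-Whitney U p-value (links less visible): {p:.3g}")
```

A median ratio and a one-sided non-parametric test are used here because view counts are highly skewed; the paper's actual pipeline additionally models view counts and stratifies by source, stance, and narrative framing.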