🤖 AI Summary
This study investigates how Twitter's algorithmic feed (prior to the platform's rebranding as X) affected the visibility of politically aligned accounts, specifically examining what drove the disproportionate exposure of right-leaning accounts.
Method: Using user timeline data collected after Twitter's change of ownership, we apply network analysis and causal inference techniques to compare content exposure in algorithmically curated feeds versus reverse-chronological timelines.
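The core comparison can be illustrated with a minimal sketch: tally how often each author appears in a user's algorithmic feed versus their chronological feed, and compute a per-author exposure ratio. The function name, the smoothing, and the toy data below are illustrative assumptions, not the paper's actual pipeline, which involves richer causal-inference controls.

```python
from collections import Counter

def exposure_ratio(algo_feed, chrono_feed):
    """Per-author exposure ratio between two feed samples.

    Each feed is a list of author IDs, one entry per impression.
    Illustrative only: the paper's real metrics and causal analysis
    are considerably more involved.
    """
    algo, chrono = Counter(algo_feed), Counter(chrono_feed)
    ratios = {}
    for author in set(algo) | set(chrono):
        # Add-one smoothing keeps the ratio finite for authors
        # who appear in only one of the two feeds.
        ratios[author] = (algo[author] + 1) / (chrono[author] + 1)
    return ratios

# Toy example: author "a" is over-represented in the algorithmic feed.
print(exposure_ratio(["a", "a", "b"], ["a", "b", "b"]))
```

A ratio above 1 indicates algorithmic over-exposure relative to the chronological baseline; aggregating such ratios by account attribute (political lean, verification status) mirrors the comparisons described here.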
Contribution/Results: We find that the heightened visibility of right-leaning accounts is not attributable to political orientation per se, but rather to their tendency to post more emotionally intense, agitating content and to receive attention from the platform's most central account, its owner. Notably, legacy verified (blue-checkmark) accounts received less exposure in algorithmic feeds. This work provides empirical evidence that the owner's attention allocation and content-level emotional intensity, rather than political bias, were primary non-ideological correlates of algorithmic amplification. These findings offer an empirical basis for understanding how recommender systems shape online trust and safety ecosystems.
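The notion of a "most central account" can be made concrete with a toy centrality computation on a directed attention network (e.g. edges for mentions or retweets). The normalized in-degree measure and the sample edges below are illustrative assumptions; the paper's network analysis may use different centrality measures.

```python
def degree_centrality(edges):
    """Normalized in-degree centrality of a directed attention network.

    `edges` is a list of (source, target) pairs, e.g. user A
    mentioning or retweeting user B. Illustrative stand-in for the
    paper's centrality analysis; the data here is made up.
    """
    indeg = {}
    nodes = set()
    for src, dst in edges:
        nodes.update((src, dst))
        indeg[dst] = indeg.get(dst, 0) + 1
    n = len(nodes)
    # Normalize by the maximum possible in-degree (n - 1).
    return {node: indeg.get(node, 0) / (n - 1) for node in nodes}

# Toy network: three accounts all direct attention toward "M",
# making "M" the most central node.
print(degree_centrality([("A", "M"), ("B", "M"), ("C", "M"), ("A", "B")]))
```

Accounts that the most central node interacts with can then be flagged and compared against their exposure ratios, which is the flavor of correlation the study reports.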
📝 Abstract
Algorithmic effects on social media platforms have come under recent scrutiny, with several works reporting that right-leaning accounts tend to receive more exposure. In this paper, we expand upon this body of work using data collected from user feeds after Twitter's change of ownership but before its rebranding to X. We replicate prior findings that right-leaning accounts reach wider audiences in algorithmically curated feeds than in reverse-chronological ones, and, crucially, we further unpack this effect to understand what did and did not correlate with these differences. Our results reveal that right-leaning accounts benefited not necessarily because of their political affiliation, but possibly because they behaved in ways associated with algorithmic rewards; namely, posting more agitating content and receiving attention from the platform's owner, Elon Musk, who was the most central account in the network. We also demonstrate that legacy-verified accounts, such as businesses and government officials, received less exposure in the algorithmic feed than non-verified or Twitter Blue-verified accounts. We discuss the implications of these findings for the intersection between behavioral incentives for algorithmic reach and online trust and safety.