🤖 AI Summary
This study investigates the algorithm-driven drift of political content toward entertainment in YouTube Shorts’ recommendation system. Focusing on sensitive topics—including the South China Sea dispute and the 2024 Taiwan presidential election—alongside general entertainment content, we develop a multidimensional analytical framework that integrates textual semantics, sentiment polarity, and topic classification. Leveraging generative AI, we perform fine-grained annotation of 685,842 Shorts videos—the first large-scale, AI-assisted labeling effort in this domain—and model recommendation bias using engagement metrics such as watch duration, like rate, and interaction density. Results reveal systematic suppression of politically sensitive content and a strong preference for emotionally neutral or positive, high-view-count, high-like-rate entertainment content, establishing an “emotion–popularity” coupled bias mechanism that induces thematic narrowing and diminished information diversity. This work provides empirical evidence and methodological innovation for research on algorithmic governance of short-video platforms.
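The engagement metrics named above can be sketched as follows; the field names and exact formulas are illustrative assumptions for this summary, not the paper's actual schema or definitions:

```python
# Hypothetical sketch of per-video engagement metrics (like rate,
# interaction density). Field names such as "views", "likes",
# "comments", and "shares" are assumed for illustration only.

def like_rate(likes: int, views: int) -> float:
    """Likes per view; defined as 0.0 when a video has no views."""
    return likes / views if views else 0.0

def interaction_density(likes: int, comments: int, shares: int, views: int) -> float:
    """Total interactions (likes + comments + shares) per view."""
    return (likes + comments + shares) / views if views else 0.0

# Toy examples with made-up counts.
videos = [
    {"views": 120_000, "likes": 9_600, "comments": 800, "shares": 400},
    {"views": 3_000, "likes": 60, "comments": 10, "shares": 2},
]
for v in videos:
    v["like_rate"] = like_rate(v["likes"], v["views"])
    v["interaction_density"] = interaction_density(
        v["likes"], v["comments"], v["shares"], v["views"]
    )
```

Under this kind of definition, a "popularity bias" claim amounts to the recommender disproportionately surfacing videos in the upper tail of these per-view ratios and raw view counts.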
📝 Abstract
The rapid growth of YouTube Shorts, now serving over 2 billion monthly users, reflects a global shift toward short-form video as a dominant mode of online content consumption. This study investigates algorithmic bias in YouTube Shorts' recommendation system by analyzing how watch time, topic sensitivity, and engagement metrics influence content visibility and drift. We focus on three content domains: the South China Sea dispute, the 2024 Taiwan presidential election, and general YouTube Shorts content. Using generative AI models, we classified 685,842 videos by relevance, topic category, and emotional tone. Our results reveal a consistent drift away from politically sensitive content toward entertainment-focused videos. Emotion analysis shows a systematic preference for joyful or neutral content, while engagement patterns indicate that highly viewed and liked videos are disproportionately promoted, reinforcing popularity bias. This work provides the first comprehensive analysis of algorithmic drift in YouTube Shorts based on textual content, emotional tone, topic categorization, and varying watch-time conditions. These findings offer new insights into how algorithmic design shapes content exposure, with implications for platform transparency and information diversity.