🤖 AI Summary
This study uncovers systematic algorithmic biases in YouTube's recommendation systems for Shorts versus long-form videos on politically sensitive topics (e.g., the South China Sea dispute). To overcome API limitations, we design a parallel, cross-format data collection framework that enables large-scale, behavior-level comparative analysis of recommendation streams. We propose the first joint bias analysis model for short and long videos, revealing that the Shorts algorithm strongly prioritizes high-engagement yet low-diversity content, amplifying unilateral narratives and intensifying echo chambers around sensitive issues. Empirical findings demonstrate structural differences in recommendation mechanisms across formats: Shorts leverages real-time feedback loops to privilege emotionally charged and ideologically polarized content, whereas long-form recommendations show comparatively greater tolerance for pluralistic viewpoints. Our work provides novel empirical evidence on how algorithmic design shapes public discourse and offers both theoretical foundations and practical guidelines for building fairer recommender systems that balance engagement with viewpoint diversity.
📝 Abstract
The growing popularity of short-form video content, such as YouTube Shorts, has transformed user engagement on digital platforms, raising critical questions about the role of recommendation algorithms in shaping user experiences. These algorithms significantly influence content consumption, yet concerns about biases, echo chambers, and content diversity persist. This study develops an efficient data collection framework to analyze YouTube's recommendation algorithms for both short-form and long-form videos, employing parallel computing and advanced scraping techniques to overcome limitations of YouTube's API. The analysis uncovers distinct behavioral patterns in recommendation algorithms across the two formats, with short-form videos showing a more immediate shift toward engaging yet less diverse content compared to long-form videos. Furthermore, a novel investigation into biases in politically sensitive topics, such as the South China Sea dispute, highlights the role of these algorithms in shaping narratives and amplifying specific viewpoints. By providing actionable insights for designing equitable and transparent recommendation systems, this research underscores the importance of responsible AI practices in the evolving digital media landscape.
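The cross-format collection framework described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: `fetch_recommendations` is a placeholder stub standing in for the paper's scraping layer (YouTube's Data API does not expose per-session recommendation streams, hence the scraping approach), and the greedy "follow the top recommendation" walk is one assumed way to trace a recommendation chain. The parallel structure, running Shorts and long-form chains side by side from the same seed videos, mirrors the comparative design the abstract describes.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_recommendations(video_id, fmt, n=3):
    """Placeholder for the scraping layer: return recommended video IDs
    for `video_id` in the given format ('shorts' or 'long').
    Here it just emits synthetic IDs so the sketch is runnable."""
    return [f"{fmt}:{video_id}:rec{i}" for i in range(n)]

def walk_chain(seed, fmt, depth):
    """Follow the top recommendation `depth` times, recording each
    recommendation list so the stream can be analyzed later."""
    chain, current = [], seed
    for _ in range(depth):
        recs = fetch_recommendations(current, fmt)
        chain.append(recs)
        current = recs[0]  # greedily follow the first recommendation
    return fmt, seed, chain

def collect_parallel(seeds, depth=5, workers=8):
    """Run Shorts and long-form chains side by side for each seed,
    using a thread pool since the work is I/O-bound scraping."""
    jobs = [(s, fmt) for s in seeds for fmt in ("shorts", "long")]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda j: walk_chain(j[0], j[1], depth), jobs))
```

Pairing each seed video with both formats yields directly comparable recommendation streams, which is what makes the behavior-level comparison (engagement shift, diversity decay) possible.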