🤖 AI Summary
Short-video platforms such as YouTube Shorts employ opaque recommendation systems whose content distribution, particularly for politically sensitive topics, may exhibit algorithmic bias and thematic drift, shaping information exposure for billions of users. To address this, we propose a keyframe-based auditing methodology: we extract keyframes from recommended video chains, generate image captions, and map both into a shared multimodal embedding space; clustering in this space enables cross-chain visual-semantic comparison and interpretable visualization. This work is the first to integrate keyframe analysis with cross-chain visual-semantic mapping, enabling efficient, transparent identification of algorithmically induced thematic concentration, drift, and potential filtering behaviors. Experimental evaluation on political content reveals statistically significant thematic shifts across recommendation chains, demonstrating the method's effectiveness and practical utility in detecting latent bias and content drift.
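To make the pipeline concrete, here is a minimal sketch of the per-video step: sample frames, keep the most visually salient ones via a simple frame-difference heuristic, caption them, and embed frames and captions in a shared CLIP space. The specific components (OpenCV sampling, the BLIP captioner, the clip-ViT-B-32 encoder, and averaging image and text vectors) are illustrative assumptions, not the exact models or salience criterion used in the paper.

```python
# Illustrative sketch (not the paper's exact implementation): keyframe
# selection, captioning, and joint CLIP embedding for one recommended video.
import cv2
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer
from transformers import pipeline

clip_model = SentenceTransformer("clip-ViT-B-32")   # shared image/text embedding space
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def extract_keyframes(video_path, per_second=1, top_k=5):
    """Sample roughly one frame per second and keep the top_k frames with the
    largest frame-to-frame change, a simple proxy for perceptual salience."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps // per_second), 1)
    frames, scores, prev_gray, idx = [], [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = 0.0 if prev_gray is None else float(np.mean(cv2.absdiff(gray, prev_gray)))
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
            scores.append(score)
            prev_gray = gray
        idx += 1
    cap.release()
    keep = np.argsort(scores)[-top_k:]
    return [frames[i] for i in keep]

def embed_video(video_path):
    """Return one embedding per keyframe, averaging the image vector and the
    vector of its generated caption so both views map to a single point."""
    keyframes = extract_keyframes(video_path)
    captions = [captioner(kf)[0]["generated_text"] for kf in keyframes]
    img_emb = clip_model.encode(keyframes, convert_to_numpy=True)
    txt_emb = clip_model.encode(captions, convert_to_numpy=True)
    return (img_emb + txt_emb) / 2.0
```

Each recommended video in a chain then contributes a small set of points in the shared space, which is what the cross-chain clustering operates on.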
📝 Abstract
YouTube Shorts and other short-form video platforms now influence how billions engage with content, yet their recommendation systems remain largely opaque. Small shifts in promoted content can significantly impact user exposure, especially for politically sensitive topics. In this work, we propose a keyframe-based method to audit bias and drift in short-form video recommendations. Rather than analyzing full videos or relying on metadata, we extract perceptually salient keyframes, generate captions, and embed both into a shared content space. Using visual mapping across recommendation chains, we observe consistent shifts and clustering patterns that indicate topic drift and potential filtering. Comparing politically sensitive topics with general YouTube categories, we find notable differences in recommendation behavior. Our findings show that keyframes provide an efficient and interpretable lens for understanding bias in short-form video algorithms.
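As a rough illustration of the cross-chain comparison, the sketch below pools keyframe embeddings from two recommendation chains (for example, one seeded with a politically sensitive topic and one with a general category), clusters them jointly, and compares where each chain's content lands. KMeans, the Jensen-Shannon distance between cluster histograms, and the cosine distance between chain centroids are stand-in choices; the abstract does not specify the paper's clustering algorithm or statistical test.

```python
# Illustrative sketch (assumed measures, not the paper's reported statistics):
# quantify how differently two recommendation chains distribute over clusters.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import jensenshannon

def chain_drift(chain_a, chain_b, n_clusters=8, seed=0):
    """chain_a, chain_b: (n_keyframes, dim) arrays of keyframe embeddings.
    Returns the Jensen-Shannon distance between the chains' cluster
    distributions and the cosine distance between their mean embeddings."""
    all_emb = np.vstack([chain_a, chain_b])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(all_emb)
    la, lb = labels[:len(chain_a)], labels[len(chain_a):]
    hist_a = np.bincount(la, minlength=n_clusters) / len(la)
    hist_b = np.bincount(lb, minlength=n_clusters) / len(lb)
    js = jensenshannon(hist_a, hist_b)
    ca, cb = chain_a.mean(axis=0), chain_b.mean(axis=0)
    cos = 1.0 - np.dot(ca, cb) / (np.linalg.norm(ca) * np.linalg.norm(cb))
    return {"js_distance": float(js), "centroid_cosine_distance": float(cos)}
```

Under these assumed measures, a chain seeded with politically sensitive content showing a larger drift score than a general-category baseline would be consistent with the thematic shifts the paper reports.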