A Keyframe-Based Approach for Auditing Bias in YouTube Shorts Recommendations

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Short-video platforms (e.g., YouTube Shorts) employ opaque recommendation systems whose content distribution—particularly regarding politically sensitive topics—may introduce algorithmic bias and thematic drift, thereby shaping information exposure for billions of users. To address this, we propose a keyframe-based auditing methodology: we extract keyframes from recommended video chains, generate image captions, and map them into a shared multimodal embedding space; clustering in this space enables cross-chain visual-semantic comparison and interpretable visualization. This work is the first to integrate keyframe analysis with cross-chain visual-semantic mapping, enabling efficient, transparent identification of algorithmically induced thematic concentration, drift, and potential filtering behaviors. Experimental evaluation on political content reveals statistically significant thematic shifts across recommendation chains, demonstrating the method’s effectiveness and practical utility in detecting latent bias and content drift.

📝 Abstract
YouTube Shorts and other short-form video platforms now influence how billions engage with content, yet their recommendation systems remain largely opaque. Small shifts in promoted content can significantly impact user exposure, especially for politically sensitive topics. In this work, we propose a keyframe-based method to audit bias and drift in short-form video recommendations. Rather than analyzing full videos or relying on metadata, we extract perceptually salient keyframes, generate captions, and embed both into a shared content space. Using visual mapping across recommendation chains, we observe consistent shifts and clustering patterns that indicate topic drift and potential filtering. Comparing politically sensitive topics with general YouTube categories, we find notable differences in recommendation behavior. Our findings show that keyframes provide an efficient and interpretable lens for understanding bias in short-form video algorithms.
Problem

Research questions and friction points this paper is trying to address.

Auditing bias in YouTube Shorts recommendation algorithms
Analyzing topic drift and filtering in short-form video platforms
Comparing recommendation behavior for politically sensitive content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keyframe extraction for content analysis
Visual mapping across recommendation chains
Shared embedding space for bias detection
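The pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a frame-difference heuristic in place of perceptual keyframe extraction, raw pixel vectors in place of a CLIP-style multimodal embedding, and a hand-rolled k-means in place of whatever clustering the authors use. All function names, thresholds, and the toy "drifting chain" data are assumptions for demonstration.

```python
import numpy as np

def select_keyframes(frames, threshold=0.25):
    """Keep a frame when its mean absolute difference from the last
    kept frame exceeds a threshold (a simple change-detection stand-in
    for perceptually salient keyframe extraction)."""
    keep = [0]
    for i in range(1, len(frames)):
        if np.abs(frames[i] - frames[keep[-1]]).mean() > threshold:
            keep.append(i)
    return keep

def cluster_embeddings(emb, k=2, iters=20, seed=0):
    """Minimal k-means over embeddings in a shared content space;
    cluster labels expose thematic concentration across chains."""
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(len(emb), size=k, replace=False)]
    for _ in range(iters):
        dists = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = emb[labels == j].mean(axis=0)
    return labels

# Toy recommendation chain: the first five frames share one visual mode,
# then the chain "drifts" to a second mode (hypothetical data).
frames = np.concatenate([np.zeros((5, 8, 8)), np.ones((5, 8, 8))])
keyframes = select_keyframes(frames)            # frame 0 plus the drift point
emb = frames.reshape(len(frames), -1)           # stand-in for CLIP embeddings
labels = cluster_embeddings(emb, k=2)           # two clusters = topic shift
```

On the toy chain, the keyframe selector keeps only the first frame and the frame where the content changes, and the two clusters cleanly separate the pre-drift and post-drift segments, which is the kind of cross-chain structure the method visualizes.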