ThumbnailTruth: A Multi-Modal LLM Approach for Detecting Misleading YouTube Thumbnails Across Diverse Cultural Settings

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Misleading video thumbnails on platforms like YouTube undermine user trust and platform credibility across cultures. To address this, we propose the first multilingual, multicultural, multimodal misinformation detection framework spanning eight languages/cultures. We introduce CultThumb, a novel cross-cultural thumbnail dataset of 2,843 videos that integrates thumbnail visual features, auto-generated video descriptions, and OCR-extracted subtitle text. Our method uses prompt engineering to orchestrate collaborative reasoning among multimodal large language models (MLLMs), including GPT-4o and Claude 3.5 Sonnet. Experimental results show that Claude 3.5 Sonnet achieves 93.8% accuracy, with precision over 92% and recall above 94%, substantially outperforming baseline approaches. This work constitutes the first systematic validation of MLLMs' capability to detect misinformation grounded in visual-textual inconsistency across cultural contexts, establishing a scalable technical pathway for platform-level trustworthy content governance.

📝 Abstract
Misleading video thumbnails on platforms like YouTube are a pervasive problem, undermining user trust and platform integrity. This paper proposes a novel multi-modal detection pipeline that uses Large Language Models (LLMs) to flag misleading thumbnails. We first construct a comprehensive dataset of 2,843 videos from eight countries, including 1,359 misleading thumbnail videos that collectively amassed over 7.6 billion views -- providing a unique cross-cultural perspective on this global issue. Our detection pipeline integrates video-to-text descriptions, thumbnail images, and subtitle transcripts to holistically analyze content and flag misleading thumbnails. Through extensive experimentation and prompt engineering, we evaluate the performance of state-of-the-art LLMs, including GPT-4o, GPT-4o Mini, Claude 3.5 Sonnet, and Gemini-1.5 Flash. Our findings show the effectiveness of LLMs in identifying misleading thumbnails, with Claude 3.5 Sonnet consistently showing strong performance, achieving an accuracy of 93.8%, precision over 92%, and recall exceeding 94% in certain scenarios. We discuss the implications of our findings for content moderation, user experience, and the ethical considerations of deploying such systems at scale. Our findings pave the way for more transparent, trustworthy video platforms and stronger content integrity for audiences worldwide.
Problem

Research questions and friction points this paper is trying to address.

Detecting misleading YouTube thumbnails across diverse cultural contexts
Addressing deceptive thumbnails that undermine user trust and platform integrity
Developing a multi-modal LLM approach for cross-cultural misleading-content detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal LLM pipeline for thumbnail analysis
Cross-cultural dataset with video-to-text integration
Claude 3.5 Sonnet achieves 93.8% accuracy
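The pipeline above combines three modalities (thumbnail image/OCR text, a video-to-text description, and subtitle transcripts) into a prompt for an MLLM, which then judges thumbnail-content consistency. A minimal sketch of that flow is shown below; the function names, prompt wording, and JSON verdict format are illustrative assumptions, not the paper's actual prompts, and the real system also passes the thumbnail image itself to the model rather than only OCR text:

```python
import json

def build_detection_prompt(video_title: str, thumbnail_ocr: str,
                           video_description: str, subtitles: str) -> str:
    """Assemble a text prompt combining the modalities the paper describes:
    thumbnail text (OCR), a video-to-text description, and subtitles."""
    return (
        "You are a content-integrity reviewer. Decide whether the thumbnail "
        "is misleading relative to the actual video content.\n"
        f"Video title: {video_title}\n"
        f"Text visible in thumbnail (OCR): {thumbnail_ocr}\n"
        f"Auto-generated video description: {video_description}\n"
        f"Subtitle excerpt: {subtitles}\n"
        'Answer as JSON: {"misleading": true or false, "reason": "..."}'
    )

def parse_verdict(model_reply: str) -> bool:
    """Parse the (assumed) JSON verdict returned by the MLLM; fall back to
    not-misleading when the reply cannot be parsed."""
    try:
        return bool(json.loads(model_reply).get("misleading", False))
    except (json.JSONDecodeError, AttributeError):
        return False

# Example with a simulated model reply -- no API call is made here.
prompt = build_detection_prompt(
    "You WON'T believe what happens next!",
    "SHOCKING footage inside",
    "A calm product review of a kitchen blender.",
    "Today we compare three blenders on price and noise.",
)
reply = ('{"misleading": true, "reason": "Thumbnail promises shocking '
         'content absent from the video."}')
print(parse_verdict(reply))  # -> True
```

In the paper's setup, the simulated reply would instead come from querying GPT-4o, Claude 3.5 Sonnet, or Gemini-1.5 Flash with this prompt plus the thumbnail image.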
Wajiha Naveed
Department of Computer Science, Lahore University of Management Sciences, Pakistan
Zartash Afzal Uzmi
Department of Computer Science, Lahore University of Management Sciences, Pakistan
Zafar Ayyub Qazi
Associate Professor, LUMS
Networked Systems