Can Large Language Models Grasp Concepts in Visual Content? A Case Study on YouTube Shorts about Depression

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the capacity of multimodal large language models (MLLMs) to comprehend abstract psychological concepts, such as self-disclosure, in YouTube short videos depicting depression. Method: Leveraging 725 annotated keyframes, we conduct a systematic comparison of human and AI semantic interpretation, examining how the granularity of concept operationalization, semantic complexity, and the diversity of video modalities affect human-AI alignment. Using LLaVA-1.6-Mistral-7B, we combine qualitative interpretability analysis with cross-modal concept alignment evaluation. Contribution/Results: We propose a prompting strategy tailored to abstract psychological concepts and a human-centered multimodal evaluation paradigm. Contrary to intuition, finer-grained operationalization does not necessarily improve alignment and can degrade it; we identify the key dimensions governing consistency. The work delivers a reproducible methodological framework and practical guidelines for AI-assisted video content analysis in computational social science.
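To make this setup concrete, below is a minimal sketch of querying the public llava-hf/llava-v1.6-mistral-7b-hf checkpoint about self-disclosure in a single keyframe, via the Hugging Face transformers API. This is not the authors' released pipeline; the prompt wording and the keyframe file name are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): ask LLaVA-1.6-Mistral-7B
# whether one annotated keyframe shows self-disclosure.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

MODEL_ID = "llava-hf/llava-v1.6-mistral-7b-hf"

processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical operationalized prompt; the paper's exact wording differs.
prompt = (
    "[INST] <image>\n"
    "Does this video frame show self-disclosure, i.e., a person revealing "
    "personal feelings or experiences related to depression? "
    "Answer yes or no, then explain briefly. [/INST]"
)

image = Image.open("keyframe_001.jpg")  # hypothetical keyframe path
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```

The brief explanation requested in the prompt corresponds to the self-generated explanations that the paper's qualitative analysis examines.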

📝 Abstract
Large language models (LLMs) are increasingly used to assist computational social science research. While prior efforts have focused on text, the potential of leveraging multimodal LLMs (MLLMs) for online video studies remains underexplored. We conduct one of the first case studies on MLLM-assisted video content analysis, comparing AI's interpretations to human understanding of abstract concepts. We leverage LLaVA-1.6 Mistral 7B to interpret four abstract concepts regarding video-mediated self-disclosure, analyzing 725 keyframes from 142 depression-related YouTube short videos. We perform a qualitative analysis of the MLLM's self-generated explanations and find that the degree of operationalization can influence the MLLM's interpretations. Interestingly, greater detail does not necessarily increase human-AI alignment. We also identify other factors affecting AI alignment with human understanding, such as concept complexity and the versatility of video genres. Our exploratory study highlights the need to customize prompts for specific concepts and calls for researchers to incorporate more human-centered evaluations when working with AI systems in a multimodal context.
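The abstract does not describe how the 725 keyframes were extracted from the 142 videos, so the sketch below assumes simple uniform sampling with OpenCV; the function name, sample count, and file names are illustrative, not the paper's procedure.

```python
# Illustrative keyframe extraction by uniform sampling (the paper's
# actual extraction procedure is not specified here).
import cv2

def extract_keyframes(video_path: str, n_frames: int = 5) -> list:
    """Sample n_frames evenly spaced frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n_frames)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

for idx, frame in enumerate(extract_keyframes("depression_short.mp4")):
    cv2.imwrite(f"keyframe_{idx:03d}.jpg", frame)
```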
Problem

Research questions and friction points this paper is trying to address.

Exploring MLLMs for video content analysis in social science.
Comparing AI and human understanding of abstract concepts.
Identifying factors affecting AI-human alignment in video interpretation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes LLaVA-1.6 Mistral 7B for video analysis
Analyzes 725 keyframes from YouTube Shorts
Explores human-AI alignment in concept interpretation (see the alignment sketch below)
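As a sketch of how human-AI alignment on a concept might be quantified, the example below computes percent agreement and Cohen's kappa over per-keyframe binary labels. Both the metrics and the label arrays are illustrative assumptions, not the paper's reported measures.

```python
# Illustrative alignment check: compare human annotations with
# parsed yes/no MLLM answers on the same keyframes.
from sklearn.metrics import cohen_kappa_score

human_labels = [1, 0, 1, 1, 0, 1, 0, 0]  # made-up human annotations
mllm_labels  = [1, 0, 0, 1, 0, 1, 1, 0]  # made-up parsed MLLM answers

agreement = sum(h == m for h, m in zip(human_labels, mllm_labels)) / len(human_labels)
kappa = cohen_kappa_score(human_labels, mllm_labels)
print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```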
Jiaying Lizzy Liu
School of Information, The University of Texas at Austin, Austin, Texas, USA
Yiheng Su
The University of Texas at Austin, Austin, Texas, USA
Praneel Seth
Computer Science Department, The University of Texas at Austin, Austin, Texas, USA