🤖 AI Summary
This study investigates multimodal large language models’ (MLLMs) capacity to comprehend abstract psychological concepts—such as self-disclosure—in YouTube short videos depicting depression. Method: Leveraging 725 annotated keyframes, we conduct a systematic human–AI semantic interpretation comparison, examining how conceptual operationalization granularity, semantic complexity, and video modality diversity affect human–AI semantic alignment. Using LLaVA-1.6-Mistral-7B, we integrate qualitative interpretability analysis with cross-modal concept alignment evaluation. Contribution/Results: We propose a tailored prompting strategy for abstract psychological concepts and a human-centered multimodal evaluation paradigm. Contrary to intuition, excessive operational granularity degrades alignment; we identify critical dimensions governing consistency. Our work delivers a reproducible methodological framework and practical guidelines for AI-driven video content analysis in computational social science.
📝 Abstract
Large language models (LLMs) are increasingly used to assist computational social science research. While prior efforts have focused on text, the potential of leveraging multimodal LLMs (MLLMs) for online video studies remains underexplored. We conduct one of the first case studies on MLLM-assisted video content analysis, comparing the AI's interpretations to human understanding of abstract concepts. We leverage LLaVA-1.6 Mistral 7B to interpret four abstract concepts regarding video-mediated self-disclosure, analyzing 725 keyframes from 142 depression-related YouTube short videos. We perform a qualitative analysis of the MLLM's self-generated explanations and find that the degree of operationalization can influence the MLLM's interpretations. Interestingly, greater detail does not necessarily increase human–AI alignment. We also identify other factors affecting AI alignment with human understanding, such as concept complexity and the versatility of video genres. Our exploratory study highlights the need to customize prompts for specific concepts and calls for researchers to incorporate more human-centered evaluations when working with AI systems in a multimodal context.