Analyzing Character Representation in Media Content using Multimodal Foundation Model: Effectiveness and Trust

📅 2025-06-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the perceived usefulness and credibility of AI-generated character representations in media for general audiences. To address this, we leverage the CLIP multimodal foundation model to perform frame-level character detection and infer age and gender attributes; we then design an interactive, web-based visualization tool tailored for non-expert users. A structured user study empirically evaluates usability and trust. Our work is the first to integrate foundation-model-driven representational analysis, human-centered visualization design, and systematic trust measurement—thereby bridging a critical gap in “human-in-the-loop validation” for AI-mediated media analysis. Results indicate that participants accurately interpret the visualizations and affirm their overall utility; however, they exhibit low confidence in the AI’s inferred demographic attributes (age/gender). Participants further emphasize the need to expand representation across demographic dimensions (e.g., ethnicity, disability) and incorporate context-sensitive modeling to improve fairness and reliability.

📝 Abstract
Recent advances in AI have enabled automated analysis of complex media content at scale and can generate actionable insights about character representation along dimensions such as gender and age. Past work has focused on quantifying representation from audio/video/text using various ML models, but without the audience in the loop. We ask: even if character distributions along demographic dimensions are available, how useful are they to the general public? Do people actually trust the numbers generated by AI models? Our work addresses these questions through a user study, while proposing a new AI-based character representation and visualization tool. Our tool builds on the Contrastive Language-Image Pretraining (CLIP) foundation model to analyze visual screen data and quantify character representation across the dimensions of age and gender. We also designed effective visualizations suitable for presenting such analytics to a lay audience. We then conducted a user study to gather empirical evidence on the usefulness and trustworthiness of the AI-generated results for carefully chosen movies, presented in the form of our visualizations. Participants were able to understand the analytics from our visualization and deemed the tool `overall useful'. Participants also indicated a need for more detailed visualizations that include more demographic categories and contextual information about the characters. Participants' trust in AI-based gender and age models was moderate to low, although they were not against the use of AI in this context. Our tool, including code, benchmarking, and data from the user study, can be found here: https://anonymous.4open.science/r/Character-Representation-Media-FF7B
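The CLIP-based pipeline the abstract describes encodes video frames and candidate text prompts into a shared embedding space, then assigns each detected character the best-matching attribute. A minimal sketch of that zero-shot scoring step, under stated assumptions: the prompt wording and function names are illustrative (not taken from the paper), and the actual CLIP image/text encoders are stubbed with toy vectors rather than a real model.

```python
import numpy as np

# Candidate prompts for zero-shot attribute inference. The exact prompt
# wording is an assumption for illustration; it is not from the paper.
GENDER_PROMPTS = ["a photo of a man", "a photo of a woman"]
AGE_PROMPTS = [
    "a photo of a child",
    "a photo of a young adult",
    "a photo of a middle-aged adult",
    "a photo of an older adult",
]

def zero_shot_label(image_emb, text_embs, labels):
    """Return the label whose text embedding is most cosine-similar
    to the frame's image embedding (CLIP-style zero-shot scoring)."""
    img = np.asarray(image_emb, dtype=float)
    txt = np.asarray(text_embs, dtype=float)
    img = img / np.linalg.norm(img)                         # normalize image embedding
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)  # normalize each prompt embedding
    sims = txt @ img                                        # cosine similarity per prompt
    return labels[int(np.argmax(sims))]

if __name__ == "__main__":
    # Toy embeddings standing in for real CLIP outputs, which would come
    # from encoding a frame crop and the prompts with a CLIP model.
    frame = np.array([0.9, 0.1])
    prompts = np.array([[1.0, 0.0], [0.0, 1.0]])
    print(zero_shot_label(frame, prompts, GENDER_PROMPTS))  # -> a photo of a man
```

In a real deployment the per-frame labels would then be aggregated over a film's runtime to produce the representation statistics shown in the visualizations.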
Problem

Research questions and friction points this paper is trying to address.

Evaluating public trust in AI-generated character demographic analytics
Assessing usefulness of automated representation tools for general audiences
Developing a multimodal AI system for character age/gender analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

CLIP foundation model for visual character analytics
Interactive visualization for audience understanding
User study evaluating trust in AI results
Evdoxia Taka
University of Glasgow
Debadyuti Bhattacharya
University of Glasgow
Joanne Garde-Hansen
University of Leeds
Sanjay Sharma
University of Warwick
Tanaya Guha
Associate Professor, University of Glasgow