AesBiasBench: Evaluating Bias and Alignment in Multimodal Language Models for Personalized Image Aesthetic Assessment

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper systematically investigates identity bias in multimodal large language models (MLLMs) for personalized image aesthetic assessment and evaluates their alignment with human aesthetic preferences. To this end, we introduce AesBiasBench, the first benchmark dedicated to aesthetic bias evaluation and the first framework to jointly model stereotype bias and human preference alignment. It comprises three subtasks: aesthetic perception, aesthetic assessment, and aesthetic empathy. Structured metrics (IFD, NRD, AAS) combine demographic subgroup analysis with ground-truth human preference data to quantify biases across gender, age, and education level. Evaluation of 19 state-of-the-art MLLMs reveals that smaller models exhibit significantly stronger identity bias, that larger models generally align better with human preferences, and that explicitly injecting identity information exacerbates bias in emotional judgments.

📝 Abstract
Multimodal Large Language Models (MLLMs) are increasingly applied in Personalized Image Aesthetic Assessment (PIAA) as a scalable alternative to expert evaluations. However, their predictions may reflect subtle biases influenced by demographic factors such as gender, age, and education. In this work, we propose AesBiasBench, a benchmark designed to evaluate MLLMs along two complementary dimensions: (1) stereotype bias, quantified by measuring variations in aesthetic evaluations across demographic groups; and (2) alignment between model outputs and genuine human aesthetic preferences. Our benchmark covers three subtasks (Aesthetic Perception, Assessment, Empathy) and introduces structured metrics (IFD, NRD, AAS) to assess both bias and alignment. We evaluate 19 MLLMs, including proprietary models (e.g., GPT-4o, Claude-3.5-Sonnet) and open-source models (e.g., InternVL-2.5, Qwen2.5-VL). Results indicate that smaller models exhibit stronger stereotype biases, whereas larger models align more closely with human preferences. Incorporating identity information often exacerbates bias, particularly in emotional judgments. These findings underscore the importance of identity-aware evaluation frameworks in subjective vision-language tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating bias in multimodal models for aesthetic assessment
Assessing alignment between model outputs and human preferences
Measuring demographic stereotype biases in aesthetic evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

AesBiasBench benchmark for bias evaluation
Structured metrics to assess stereotype bias
Identity-aware framework for multimodal models
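The structured bias and alignment metrics can be illustrated with a small sketch. Note that the exact IFD, NRD, and AAS formulas are not reproduced in this listing, so the definitions below (a range-normalized pairwise score difference across demographic groups, and a match rate against human ground truth) are assumptions for demonstration only, not the paper's formulas:

```python
# Illustrative sketch only: the metric definitions below are assumed for
# demonstration and are NOT taken from the AesBiasBench paper.

def normalized_response_difference(scores_by_group, score_range=4.0):
    """Mean absolute pairwise difference between per-group mean aesthetic
    scores, normalized by the rating-scale range (assumed 1-5 here)."""
    means = [sum(s) / len(s) for s in scores_by_group.values()]
    diffs = [abs(a - b) for i, a in enumerate(means) for b in means[i + 1:]]
    return sum(diffs) / (len(diffs) * score_range) if diffs else 0.0

def alignment_accuracy(model_choices, human_choices):
    """Fraction of items where the model's choice matches the human
    ground-truth preference."""
    matches = sum(m == h for m, h in zip(model_choices, human_choices))
    return matches / len(model_choices)

# Hypothetical model scores under three injected identity conditions.
scores = {"group_a": [3.0, 4.0], "group_b": [2.0, 3.0], "group_c": [3.5, 4.5]}
nrd = normalized_response_difference(scores)          # 0.25
acc = alignment_accuracy(["A", "B", "A"], ["A", "B", "B"])  # 2/3
```

A larger normalized difference indicates that the model's aesthetic judgments shift more when the injected demographic identity changes, which is the kind of effect the benchmark is designed to surface.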
Kun Li
City University of Hong Kong
Lai-Man Po
City University of Hong Kong
Hongzheng Yang
The Chinese University of Hong Kong
Xuyuan Xu
City University of Hong Kong
Kangcheng Liu
Hunan University
Yuzhi Zhao
Ph.D., City University of Hong Kong; B.Eng., Huazhong University of Science and Technology