Evaluating AI-Generated Images of Cultural Artifacts with Community-Informed Rubrics

📅 2026-04-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the frequent neglect of local cultural perspectives in existing automated evaluations of AI-generated images, particularly regarding "cultural appropriateness." It introduces an evaluation framework that integrates diverse community participation from the outset, collaborating with blind and low vision individuals in the UK and residents of Kerala and Tamil Nadu in India to systematically translate lived cultural experiences and community concerns into actionable assessment dimensions. Using multimodal large language models as judges (LLM-as-a-judge), the approach operationalizes community consensus into structured scoring rules, enabling automated evaluation of cultural appropriateness. The work establishes a conceptual framework grounded in community values, demonstrates its feasibility, and exposes critical limitations in current AI models' understanding of cultural context.
๐Ÿ“ Abstract
Measurement is essential to improving AI performance and mitigating harms for marginalized groups. As generative AI systems are rapidly deployed across geographies and contexts, AI measurement practices must be designed to support repeatable, automatable application across different models, datasets, and evaluation settings. But the drive to automate measurement can be in tension with the ability for measurement instruments to capture the expertise and perspectives of communities impacted by AI. Recent work advocates for breaking measurement into several key stages: first moving from an abstract concept to be measured into a precise, "systematized" concept; next operationalizing the systematized concept into a concrete measurement instrument; and finally applying the measurement instrument on data to produce measurements. This opens up an opportunity to concentrate community engagement in the systematization phase before operationalizing and applying measurement instruments. In this paper, we explore how to involve communities in systematizing the concept of "cultural appropriateness" in text-to-image models' representation of culturally significant artifacts through case studies with three communities: blind and low vision individuals residing in the UK, residents of Kerala, and residents of Tamil Nadu. Our systematized concepts reflect community members' lived experiences interacting with each artifact and how they want their material culture to be depicted, demonstrating the value of community involvement in defining valid measures. We explore how these systematized concepts can be operationalized into automated measurement instruments that could be applied using a multimodal LLM-as-a-judge approach and challenges that remain. We reflect on the benefits and limitations of such approaches.
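The abstract describes operationalizing a community-systematized concept into an automated measurement instrument applied via a multimodal LLM-as-a-judge. A minimal sketch of what such an operationalization might look like is below; the criteria, score range, and artifact name are illustrative assumptions, not the paper's actual rubric:

```python
# Hedged sketch: turning community-derived rubric criteria into a structured
# scoring prompt for a multimodal LLM judge, then parsing and aggregating
# per-criterion scores. All criterion names, the 1-5 scale, and the example
# artifact are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    description: str  # a systematized community concern, phrased for the judge


def build_judge_prompt(artifact: str, criteria: list[Criterion]) -> str:
    """Compose a rubric prompt asking the judge to score each criterion 1-5."""
    lines = [
        f"You are evaluating an AI-generated image of: {artifact}.",
        "Score each criterion from 1 (poor) to 5 (excellent).",
        "Answer with one 'name: score' line per criterion.",
    ]
    for c in criteria:
        lines.append(f"- {c.name}: {c.description}")
    return "\n".join(lines)


def parse_scores(judge_output: str) -> dict[str, int]:
    """Parse the 'name: score' lines returned by the judge model."""
    scores = {}
    for line in judge_output.strip().splitlines():
        name, _, value = line.partition(":")
        scores[name.strip()] = int(value.strip())
    return scores


# Illustrative criteria a community workshop might produce.
criteria = [
    Criterion("material", "Depicts traditional materials rather than substitutes."),
    Criterion("context", "Shown in a culturally appropriate setting."),
]

prompt = build_judge_prompt("a culturally significant lamp", criteria)
# A real pipeline would send `prompt` plus the image to a multimodal model;
# here we parse a stand-in judge response.
scores = parse_scores("material: 4\ncontext: 2")
mean_score = sum(scores.values()) / len(scores)  # 3.0
```

Concentrating community input in the `criteria` definitions, as the paper's staged view of measurement suggests, keeps the automated judging step repeatable while the systematized concept remains community-authored.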
Problem

Research questions and friction points this paper is trying to address.

cultural appropriateness
AI-generated images
community-informed evaluation
measurement systematization
cultural artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

community-informed evaluation
systematization
cultural appropriateness
generative AI
LLM-as-a-judge
🔎 Similar Papers
No similar papers found.