LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation

📅 2025-07-09
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) are trained on vast image-text corpora but often have limited linguistic coverage, which can yield biased and unfair outputs across languages, and dedicated benchmarks for multilingual fairness remain scarce. To address this gap, the authors introduce LinguaMark, a multilingual fairness-focused visual question answering (VQA) benchmark covering 6,875 image-text pairs across 11 languages and five social attributes. Model outputs are scored along three dimensions: bias, answer relevancy, and faithfulness. Experiments show that closed-source models (GPT-4o, Gemini2.5) generally achieve the highest overall performance, open-source models such as Gemma3 and Qwen2.5 remain competitive on specific social attributes, and Qwen2.5 shows strong generalization across languages. The benchmark data and evaluation code are publicly released to support reproducible, equitable multimodal research.

📝 Abstract
Large Multimodal Models (LMMs) are typically trained on vast corpora of image-text data but are often limited in linguistic coverage, leading to biased and unfair outputs across languages. While prior work has explored multimodal evaluation, less emphasis has been placed on assessing multilingual capabilities. In this work, we introduce LinguaMark, a benchmark designed to evaluate state-of-the-art LMMs on a multilingual Visual Question Answering (VQA) task. Our dataset comprises 6,875 image-text pairs spanning 11 languages and five social attributes. We evaluate models using three key metrics: Bias, Answer Relevancy, and Faithfulness. Our findings reveal that closed-source models generally achieve the highest overall performance. Both closed-source (GPT-4o and Gemini2.5) and open-source models (Gemma3, Qwen2.5) perform competitively across social attributes, and Qwen2.5 demonstrates strong generalization across multiple languages. We release our benchmark and evaluation code to encourage reproducibility and further research.
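
To make the dataset layout concrete, here is a minimal sketch of how a LinguaMark-style VQA record and evaluation loop might look in Python. The field names and the `model.answer` interface are hypothetical illustrations based on the abstract (image-text pairs, 11 languages, five social attributes), not the released schema or API.

```python
from dataclasses import dataclass

# Hypothetical record layout for one multilingual VQA item;
# the released LinguaMark schema may differ.
@dataclass
class VQARecord:
    image_path: str        # image half of the image-text pair
    question: str          # question text, written in `language`
    language: str          # one of the 11 covered languages, e.g. "hi"
    social_attribute: str  # one of the five social attributes, e.g. "gender"
    reference_answer: str  # ground-truth answer used for faithfulness checks

def collect_responses(model, records: list[VQARecord]):
    """Query an LMM on every item, keeping each record alongside its answer
    so results can later be grouped by language or social attribute."""
    return [(rec, model.answer(rec.image_path, rec.question)) for rec in records]
```

Grouping responses by `language` and `social_attribute` is what enables the per-language and per-attribute comparisons reported in the paper's results.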
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual fairness in Large Multimodal Models (LMMs)
Assessing bias and performance in multilingual Visual Question Answering
Benchmarking LMMs across 11 languages and social attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual VQA benchmark for LMM evaluation
Evaluates models on Bias, Answer Relevancy, and Faithfulness (see the sketch after this list)
Covers 11 languages and five social attributes
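
The page does not spell out how the three metrics are computed; a common way to operationalize them is an LLM-as-judge pass for answer relevancy and faithfulness plus a group-disparity statistic for bias. The sketch below illustrates that generic pattern only; `judge_score` and the disparity definition are assumptions, not the paper's actual scoring method.

```python
from statistics import mean

def judge_score(question: str, answer: str, reference: str, criterion: str) -> float:
    """Hypothetical LLM-as-judge call returning a score in [0, 1].
    In practice this would prompt a judge model with the criterion
    ("answer relevancy" or "faithfulness") and parse its rating."""
    raise NotImplementedError("plug in a judge model here")

def bias_disparity(scores_by_group: dict[str, list[float]]) -> float:
    """One simple bias proxy: the largest gap in mean score across
    demographic groups (0.0 means perfectly balanced performance)."""
    group_means = [mean(scores) for scores in scores_by_group.values() if scores]
    return max(group_means) - min(group_means) if group_means else 0.0
```

Under this reading, a model does well on LinguaMark when relevancy and faithfulness stay high in every language while the disparity across social-attribute groups stays low.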