Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs

šŸ“… 2025-05-21
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
This paper addresses the poor cross-lingual consistency of cultural knowledge in multimodal large language models (MLLMs). To this end, it proposes the first dual-dimensional cross-lingual consistency evaluation framework and introduces two novel benchmarks: KnowRecall, which assesses factual knowledge consistency across 15 languages through culturally grounded questions about global landmarks, and VisRecall, which evaluates visual memory consistency across 9 languages through image-free landmark description tasks. Methodologically, the framework integrates multilingual visual question answering (VQA), zero-shot visual description generation, and cross-lingual knowledge alignment assessment, augmented by human verification and consistency scoring. Experiments reveal that state-of-the-art MLLMs, including proprietary models, achieve only 58% average cross-lingual consistency, exposing critical limitations in cultural awareness and multilingual multimodal alignment. The proposed benchmarks offer reproducible, fine-grained evaluation tools and concrete directions for building robust, culturally consistent MLLMs.

šŸ“ Abstract
The rapid evolution of multimodal large language models (MLLMs) has significantly enhanced their real-world applications. However, achieving consistent performance across languages, especially when integrating cultural knowledge, remains a significant challenge. To better assess this issue, we introduce two new benchmarks: KnowRecall and VisRecall, which evaluate cross-lingual consistency in MLLMs. KnowRecall is a visual question answering benchmark designed to measure factual knowledge consistency in 15 languages, focusing on cultural and historical questions about global landmarks. VisRecall assesses visual memory consistency by asking models to describe landmark appearances in 9 languages without access to images. Experimental results reveal that state-of-the-art MLLMs, including proprietary ones, still struggle to achieve cross-lingual consistency. This underscores the need for more robust approaches that produce truly multilingual and culturally aware models.
Problem

Research questions and friction points this paper is trying to address.

Assessing cross-lingual consistency in multimodal LLMs
Evaluating cultural knowledge integration across languages
Benchmarking visual and factual recall in multilingual contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing KnowRecall for factual knowledge consistency
Developing VisRecall for visual memory consistency
Benchmarking cross-lingual consistency in 15 languages
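The consistency scoring mentioned in the summary could be sketched as follows. This is a hypothetical metric, not the paper's exact formula: it measures, per question, the fraction of languages whose (normalized) answer agrees with the majority answer, then averages over the benchmark.

```python
from collections import Counter


def consistency_score(answers_by_language: dict[str, str]) -> float:
    """Fraction of languages whose answer matches the majority answer
    for a single question (hypothetical consistency metric)."""
    answers = list(answers_by_language.values())
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)


def average_consistency(per_question: list[dict[str, str]]) -> float:
    """Mean per-question consistency over a benchmark."""
    return sum(consistency_score(q) for q in per_question) / len(per_question)


# Example: two of three languages agree on a landmark's construction year.
score = consistency_score({"en": "1889", "fr": "1889", "zh": "1887"})
```

A stricter variant could require agreement with a gold answer rather than a majority vote; both reward models that answer identically regardless of query language.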