MPR-GUI: Benchmarking and Enhancing Multilingual Perception and Reasoning in GUI Agents

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large vision-language models (LVLMs) face two key bottlenecks in multilingual GUI tasks: poor cross-lingual generalization and insufficient fine-grained perception and reasoning (P&R) of interface elements’ functionality and spatial relationships. To address this, we introduce MPR-GUI-Bench—the first fine-grained multilingual GUI benchmark—systematically exposing performance disparities across languages. Building on this, we propose GUI-XLI, a cross-lingual latent-space intervention method that identifies layer-specific representation shifts in LVLM hidden states and selectively calibrates P&R-related layers to align cross-lingual semantics. Experiments demonstrate that GUI-XLI achieves an average 6.5% improvement in multilingual P&R performance on MPR-GUI-Bench, substantially narrowing the performance gap between non-English and English settings. Our work establishes a scalable, architecture-agnostic paradigm for enhancing LVLMs’ cross-lingual capability in GUI understanding, advancing the development of globally deployable GUI agents.

📝 Abstract
With the advancement of computational resources, Large Vision-Language Models (LVLMs) exhibit impressive Perception and Reasoning (P&R) performance on Graphical User Interface (GUI) tasks. However, while they demonstrate strong P&R capabilities in English GUI scenarios, their performance in multilingual settings has received little attention, which limits their global applicability. Moreover, existing studies on GUI tasks lack fine-grained analyses, such as of widget functions and elements' spatial relationships, which are fundamental for more targeted improvements. To tackle these issues, we propose MPR-GUI-Bench, a Multilingual fine-grained Perception and Reasoning GUI Benchmark that evaluates GUI agents' P&R capabilities. Evaluation results show that LVLMs perform significantly worse at P&R in non-English languages than in English. To address these gaps, and building on prior findings that inputs in different languages produce markedly different hidden states in the latent space, we propose GUI-XLI, a GUI Cross-Lingual Intervention method that intervenes on the hidden states at P&R capability-related layers to narrow the gap between English and other languages. Experimental results indicate that our method improves GUI agents' multilingual P&R capability by 6.5% on average.
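The abstract does not spell out the form of the intervention, but the general recipe it describes (calibrating non-English hidden states toward English ones at selected layers) can be sketched with a simple mean-difference steering vector. Everything below is illustrative: `estimate_shift`, `select_layers`, and `intervene` are hypothetical helpers, the layer-selection criterion (largest English/non-English gap) is only a crude proxy for the paper's "P&R capability-related layers", and the actual GUI-XLI method may differ substantially.

```python
import numpy as np

def estimate_shift(en_states, xx_states):
    # en_states, xx_states: (n_samples, hidden_dim) arrays of hidden
    # states from parallel English / non-English prompts at one layer.
    # The shift is the difference of their means (a steering vector).
    return en_states.mean(axis=0) - xx_states.mean(axis=0)

def select_layers(en_by_layer, xx_by_layer, k=2):
    # Pick the k layers with the largest English/non-English mean gap,
    # as a stand-in for the paper's P&R-related layer selection.
    gaps = [np.linalg.norm(estimate_shift(e, x))
            for e, x in zip(en_by_layer, xx_by_layer)]
    return sorted(int(i) for i in np.argsort(gaps)[-k:])

def intervene(hidden, shift, alpha=1.0):
    # Calibrate a non-English hidden state toward the English mean by
    # adding the scaled steering vector.
    return hidden + alpha * shift
```

Applying `intervene` only at the selected layers keeps the method architecture-agnostic: it touches hidden states at inference time rather than the model's weights.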
Problem

Research questions and friction points this paper is trying to address.

Evaluates multilingual GUI perception and reasoning gaps
Addresses lack of fine-grained GUI task analysis
Proposes method to improve non-English GUI performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual fine-grained GUI benchmark for evaluation
GUI Cross-Lingual Intervention method on hidden states
Improves multilingual GUI perception and reasoning by 6.5%