MLA-Trust: Benchmarking Trustworthiness of Multimodal LLM Agents in GUI Environments

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language model agents (MLAs) face critical trustworthiness challenges in truthfulness, controllability, safety, and privacy when performing high-risk interactions in GUI environments. Method: This work introduces the first comprehensive trustworthiness evaluation framework for MLAs operating on real-world web pages and mobile apps. It formally defines and quantifies trust metrics along these four dimensions, constructs a benchmark of 34 high-risk interactive tasks, and integrates multimodal adversarial injection, behavioral trajectory auditing, and risk attribution analysis. Contributions/Results: The study uncovers a nonlinear risk-accumulation mechanism induced by multi-step interactions; shows that interactive MLAs exhibit significantly degraded trustworthiness relative to static multimodal LLMs (e.g., a 3.2× increase in harmful content generation); identifies previously unreported trust vulnerabilities across 13 state-of-the-art MLAs; and open-sources an extensible evaluation toolkit enabling cross-platform, continuous assessment.

📝 Abstract
The emergence of multimodal LLM-based agents (MLAs) has transformed interaction paradigms by seamlessly integrating vision, language, action and dynamic environments, enabling unprecedented autonomous capabilities across GUI applications ranging from web automation to mobile systems. However, MLAs introduce critical trustworthiness challenges that extend far beyond traditional language models' limitations, as they can directly modify digital states and trigger irreversible real-world consequences. Existing benchmarks inadequately tackle these unique challenges posed by MLAs' actionable outputs, long-horizon uncertainty and multimodal attack vectors. In this paper, we introduce MLA-Trust, the first comprehensive and unified framework that evaluates MLA trustworthiness across four principled dimensions: truthfulness, controllability, safety and privacy. We utilize websites and mobile applications as realistic testbeds, designing 34 high-risk interactive tasks and curating rich evaluation datasets. Large-scale experiments involving 13 state-of-the-art agents reveal previously unexplored trustworthiness vulnerabilities unique to multimodal interactive scenarios. For instance, proprietary and open-source GUI-interacting MLAs pose more severe trustworthiness risks than static MLLMs, particularly in high-stakes domains; the transition from static MLLMs to interactive MLAs considerably compromises trustworthiness, enabling harmful content generation in multi-step interactions that standalone MLLMs would typically prevent; and multi-step execution, while enhancing the adaptability of MLAs, involves latent nonlinear risk accumulation across successive interactions, circumventing existing safeguards and resulting in unpredictable derived risks. Moreover, we present an extensible toolbox to facilitate continuous evaluation of MLA trustworthiness across diverse interactive environments.
Problem

Research questions and friction points this paper is trying to address.

Assessing trustworthiness risks in multimodal LLM agents interacting with GUI environments
Evaluating actionable outputs and long-horizon uncertainty in MLAs
Identifying vulnerabilities in multi-step execution and risk accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLA-Trust framework evaluates multimodal LLM agent trustworthiness
Interactive tasks test truthfulness, safety, controllability, privacy
Toolbox enables continuous trustworthiness assessment in diverse environments
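The nonlinear risk-accumulation effect described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration: the dimension names follow the paper, but the `StepResult`/`audit_trajectory` API and the survival-style aggregation are not MLA-Trust's actual toolkit.

```python
from dataclasses import dataclass

# The four trust dimensions named in the paper.
DIMENSIONS = ("truthfulness", "controllability", "safety", "privacy")

@dataclass
class StepResult:
    """One step of an agent's GUI trajectory (hypothetical structure)."""
    action: str
    risk: dict  # per-dimension risk score in [0, 1] for this step

def audit_trajectory(steps, threshold=0.5):
    """Aggregate per-step risks over a multi-step trajectory.

    Uses a survival-style aggregation, 1 - prod(1 - r_i): the chance
    that at least one step violates a dimension compounds with
    trajectory length rather than averaging out, mirroring the
    nonlinear risk accumulation the paper reports.
    """
    accumulated = {d: 0.0 for d in DIMENSIONS}
    for step in steps:
        for d in DIMENSIONS:
            r = step.risk.get(d, 0.0)
            accumulated[d] = 1.0 - (1.0 - accumulated[d]) * (1.0 - r)
    flagged = [d for d, v in accumulated.items() if v >= threshold]
    return accumulated, flagged

# Three individually low-risk steps still cross the safety threshold.
steps = [
    StepResult("click_login", {"safety": 0.2, "privacy": 0.1}),
    StepResult("fill_form", {"safety": 0.2, "privacy": 0.3}),
    StepResult("submit_payment", {"safety": 0.3, "privacy": 0.3}),
]
scores, flagged = audit_trajectory(steps)
```

Note the design choice: a per-step safeguard that only checks each risk against the threshold in isolation would pass every step here, which is exactly the gap between static per-output filtering and trajectory-level auditing.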
Xiao Yang
Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, China
Jiawei Chen
Shanghai Key Lab. of Multidimensional Info. Processing, East China Normal University, Shanghai, China
Jun Luo
Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, China
Zhengwei Fang
Beijing Jiaotong University
Adversarial Robustness · Vision Language Model · Computer Vision · Uncertainty
Yinpeng Dong
Tsinghua University
Machine Learning · Deep Learning · AI Safety
Hang Su
Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, China
Jun Zhu
Department of Computer Science & Technology, Institute for AI, BNRist Center, THBI Lab, Tsinghua-Bosch Joint Center for ML, Tsinghua University, Beijing, China