Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM-driven GUI agents pose significant privacy and security risks when handling sensitive data due to the absence of human oversight, yet existing evaluations overemphasize performance while neglecting trustworthiness. Method: We propose the first human-centered evaluation framework for GUI agents: (1) systematically identifying three agent-specific risk categories; (2) introducing a novel paradigm integrating risk assessment, context-aware informed consent mechanisms, and privacy- and security-by-design principles; and (3) establishing a five-dimensional evaluation challenge map—covering privacy, security, controllability, transparency, and accountability—via human-in-the-loop evaluation, risk-oriented metrics, and GUI behavioral audit modeling. Contribution: The framework is scalable, interpretable, and participatory, offering both theoretical foundations and practical guidelines for developing trustworthy GUI agents. It bridges critical gaps between technical capability and human-centric assurance in interactive AI systems.

📝 Abstract
The rise of Large Language Models (LLMs) has revolutionized Graphical User Interface (GUI) automation through LLM-powered GUI agents, yet their ability to process sensitive data with limited human oversight raises significant privacy and security risks. This position paper identifies three key risks of GUI agents and examines how they differ from traditional GUI automation and general autonomous agents. Despite these risks, existing evaluations focus primarily on performance, leaving privacy and security assessments largely unexplored. We review current evaluation metrics for both GUI and general LLM agents and outline five key challenges in integrating human evaluators for GUI agent assessments. To address these gaps, we advocate for a human-centered evaluation framework that incorporates risk assessments, enhances user awareness through in-context consent, and embeds privacy and security considerations into GUI agent design and evaluation.
Problem

Research questions and friction points this paper is trying to address.

Assessing privacy and security risks in LLM-powered GUI agents
Integrating human evaluators into GUI agent assessments
Developing human-centered evaluation frameworks for trustworthy GUI agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered evaluation framework for GUI agents
In-context consent enhances user awareness
Embed privacy and security in agent design
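
The in-context consent idea above can be made concrete with a small sketch: before a GUI agent executes an action, a risk assessor scores it, and non-trivial risks are surfaced to the user for approval at the moment of action. The `Action` fields, the three risk levels, and the `assess_risk`/`execute_with_consent` helpers are illustrative assumptions, not the paper's actual framework or metrics.

```python
from dataclasses import dataclass

# Illustrative three-level risk scale (an assumption, not the paper's taxonomy).
LOW, MEDIUM, HIGH = 0, 1, 2

@dataclass
class Action:
    """A candidate GUI action the agent wants to perform (hypothetical schema)."""
    description: str
    touches_sensitive_data: bool
    irreversible: bool

def assess_risk(action: Action) -> int:
    """Toy risk scoring: sensitive data raises risk; irreversible effects raise it further."""
    risk = LOW
    if action.touches_sensitive_data:
        risk = max(risk, MEDIUM)
    if action.irreversible:
        risk = HIGH
    return risk

def execute_with_consent(action: Action, ask_user) -> bool:
    """Gate execution behind in-context consent whenever risk is non-trivial.

    `ask_user` is any callable that shows a prompt and returns True/False,
    standing in for a real consent dialog in the agent's UI.
    """
    risk = assess_risk(action)
    if risk >= MEDIUM:
        # Surface the risk to the user at the point of action, not buried in a settings page.
        if not ask_user(f"Agent wants to: {action.description} (risk={risk}). Allow?"):
            return False  # user declined; the action is not executed
    return True  # low risk, or user consented: proceed with the GUI action
```

A low-risk action (e.g. scrolling) passes through silently, while a sensitive, irreversible one (e.g. submitting a payment form) requires explicit approval — matching the paper's emphasis on user awareness without constant interruption.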