🤖 AI Summary
This work addresses the security risks, including system compromise and privacy leakage, posed by vision-language model (VLM)-driven mobile GUI agents during real-world execution. To establish a foundation for this line of research, the authors introduce MobileRisk-Live, a dynamic sandbox environment paired with a safety detection benchmark of realistic agent trajectories carrying fine-grained annotations. Built on it, they propose OS-Sentinel, a hybrid detection framework that combines a Formal Verifier, which catches explicit system-level violations, with a VLM-based Contextual Judge, which assesses context-sensitive risks in agent actions. Across multiple metrics, including risk identification accuracy and false positive rate, OS-Sentinel improves over existing approaches by 10–30%. Together, the benchmark and framework provide a verifiable, interpretable, and scalable basis for building safer and more reliable autonomous mobile agents.
📝 Abstract
Computer-using agents powered by Vision-Language Models (VLMs) have demonstrated human-like capabilities in operating digital environments such as mobile platforms. While these agents hold great promise for advancing digital automation, their potential for unsafe operations, such as system compromise and privacy leakage, raises significant concerns. Detecting these safety issues across the vast and complex operational space of mobile environments presents a formidable challenge that remains critically underexplored. To establish a foundation for mobile agent safety research, we introduce MobileRisk-Live, a dynamic sandbox environment accompanied by a safety detection benchmark comprising realistic trajectories with fine-grained annotations. Built upon this, we propose OS-Sentinel, a novel hybrid safety detection framework that synergistically combines a Formal Verifier for detecting explicit system-level violations with a VLM-based Contextual Judge for assessing contextual risks in agent actions. Experiments show that OS-Sentinel achieves 10–30% improvements over existing approaches across multiple metrics. Further analysis provides critical insights that foster the development of safer and more reliable autonomous mobile agents.