🤖 AI Summary
This study addresses a critical security vulnerability of large language model (LLM)-driven web agents: their susceptibility to obfuscated malicious URLs. It presents the first systematic characterization and quantification of this susceptibility and introduces MalURLBench, the first benchmark for evaluating LLM vulnerability to malicious URLs, comprising 10 real-world scenarios, 7 categories of malicious websites, and 61,845 attack samples. Building on this benchmark, the authors propose URLGuard, a lightweight defense module designed to strengthen LLM resilience against disguised malicious URLs. Extensive evaluation across 12 mainstream LLMs shows that current models generally fail to recognize such threats, whereas URLGuard significantly improves detection and mitigation, offering a practical safeguard for deploying secure LLM-based web agents.
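The summary does not spell out what "obfuscated" means in this setting; purely for intuition, the sketch below shows a few textbook URL-obfuscation tricks. These examples are illustrative assumptions, not samples drawn from MalURLBench:

```python
# Illustrative examples of classic URL obfuscation tricks (not samples from
# MalURLBench; shown only to make the threat concrete).
from urllib.parse import urlparse

obfuscated = {
    # Userinfo trick: everything before '@' in the authority is userinfo,
    # so this URL actually points at evil.example, not accounts.google.com.
    "userinfo": "https://accounts.google.com@evil.example/login",
    # Punycode/homoglyph: an IDNA-encoded lookalike of a familiar brand name.
    "punycode": "https://xn--pypal-4ve.example/verify",
    # Raw IP host: a numeric address dodges simple domain-reputation checks.
    "numeric_host": "http://203.0.113.7/update",
}

for name, url in obfuscated.items():
    # urlparse exposes the host the agent would actually connect to.
    print(f"{name:>12}: real host = {urlparse(url).hostname}")
```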
📝 Abstract
LLM-based web agents have become increasingly popular for their utility in daily life and work. However, they exhibit critical vulnerabilities when processing malicious URLs: accepting a disguised malicious URL enables subsequent access to unsafe webpages, which can cause severe damage to service providers and users. Despite this risk, no benchmark currently targets this emerging threat. To address this gap, we propose MalURLBench, the first benchmark for evaluating LLMs' vulnerabilities to malicious URLs. MalURLBench contains 61,845 attack instances spanning 10 real-world scenarios and 7 categories of real malicious websites. Experiments with 12 popular LLMs reveal that existing models struggle to detect elaborately disguised malicious URLs. We further identify and analyze key factors that affect attack success rates and propose URLGuard, a lightweight defense module. We believe this work will provide a foundational resource for advancing the security of web agents. Our code is available at https://github.com/JiangYingEr/MalURLBench.
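The abstract does not describe URLGuard's internals, so the following is only a minimal sketch of what a lightweight URL pre-screening module for a web agent could look like. Every function name, heuristic, and threshold below is an assumption made for illustration, not the authors' implementation:

```python
# Minimal sketch of a URL pre-screening guard for a web agent.
# All heuristics and names here are illustrative assumptions; the paper's
# URLGuard is not described at this level of detail in the abstract.
from urllib.parse import urlparse
import ipaddress


def screen_url(url: str) -> list[str]:
    """Return a list of warnings; an empty list means no red flags found."""
    warnings = []
    parsed = urlparse(url)
    hostname = parsed.hostname or ""

    # Flag non-HTTPS schemes.
    if parsed.scheme != "https":
        warnings.append(f"non-HTTPS scheme: {parsed.scheme!r}")

    # '@' in the authority hides the real host behind fake userinfo,
    # e.g. https://trusted.com@evil.example/ connects to evil.example.
    if "@" in parsed.netloc:
        warnings.append("userinfo trick: '@' present in authority component")

    # Punycode-encoded labels often carry homoglyph lookalike domains.
    if any(label.startswith("xn--") for label in hostname.split(".")):
        warnings.append("punycode label (possible homoglyph domain)")

    # Raw IP addresses are a classic way to evade domain-reputation checks.
    try:
        ipaddress.ip_address(hostname)
        warnings.append("raw IP address used as host")
    except ValueError:
        pass

    # Unusually long URLs can hide redirect chains or encoded payloads
    # (the 200-character cutoff is an arbitrary illustrative threshold).
    if len(url) > 200:
        warnings.append("unusually long URL")

    return warnings


if __name__ == "__main__":
    for u in [
        "https://trusted.com@evil.example/login",
        "https://xn--gogle-cta.example/signin",
        "http://203.0.113.7/update",
    ]:
        print(u, "->", screen_url(u) or "no red flags")
```

In an agent pipeline, a check of this kind would run before any navigation tool call, with flagged URLs either blocked outright or escalated to the user for confirmation.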