SafeToolBench: Pioneering a Prospective Benchmark to Evaluating Tool Utilization Safety in LLMs

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) can cause irreversible harm, such as financial loss or privacy leakage, when autonomously invoking external tools. Method: We introduce SafeToolBench, the first benchmark to assess tool utilization safety prospectively, i.e., before any tool is executed, covering malicious user instructions and diverse practical toolsets. We further propose SafeInstructTool, a framework that enhances LLMs' awareness of tool utilization safety from three perspectives (User Instruction, Tool Itself, and Joint Instruction-Tool), comprising nine dimensions in total. Contribution/Results: Experiments with four LLMs show that existing approaches fail to capture all tool utilization risks, while SafeInstructTool significantly improves the models' self-awareness, enabling safer and more trustworthy tool use.

📝 Abstract
Large Language Models (LLMs) have exhibited great performance in autonomously calling various tools in external environments, leading to better problem solving and task automation capabilities. However, these external tools also amplify potential risks such as financial loss or privacy leakage with ambiguous or malicious user instructions. Compared to previous studies, which mainly assess the safety awareness of LLMs after obtaining the tool execution results (i.e., retrospective evaluation), this paper focuses on prospective ways to assess the safety of LLM tool utilization, aiming to avoid irreversible harm caused by directly executing tools. To this end, we propose SafeToolBench, the first benchmark to comprehensively assess tool utilization security in a prospective manner, covering malicious user instructions and diverse practical toolsets. Additionally, we propose a novel framework, SafeInstructTool, which aims to enhance LLMs' awareness of tool utilization security from three perspectives (i.e., User Instruction, Tool Itself, and Joint Instruction-Tool), leading to nine detailed dimensions in total. We experiment with four LLMs using different methods, revealing that existing approaches fail to capture all risks in tool utilization. In contrast, our framework significantly enhances LLMs' self-awareness, enabling safer and more trustworthy tool utilization.
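
The prospective setup described in the abstract can be pictured as a safety gate that runs before any tool call is executed. The sketch below is purely illustrative, not the authors' implementation: only the three perspectives (User Instruction, Tool Itself, Joint Instruction-Tool) come from the paper, while the function names, risk scale, threshold, and judge prompt are assumptions.

```python
# Illustrative sketch of a prospective (pre-execution) tool-safety gate.
# Only the three perspectives are taken from the paper; everything else
# (names, risk scale, prompt wording) is hypothetical.
from dataclasses import dataclass

PERSPECTIVES = ("user_instruction", "tool_itself", "joint_instruction_tool")

@dataclass
class ToolCall:
    instruction: str   # the user's request
    tool_name: str     # tool the LLM intends to invoke
    tool_doc: str      # tool description / schema
    arguments: dict    # arguments the LLM plans to pass

def assess_risk(call: ToolCall, judge) -> dict:
    """Ask a judge LLM to rate risk for each perspective BEFORE execution."""
    scores = {}
    for perspective in PERSPECTIVES:
        prompt = (
            f"Perspective: {perspective}\n"
            f"Instruction: {call.instruction}\n"
            f"Tool: {call.tool_name} - {call.tool_doc}\n"
            f"Arguments: {call.arguments}\n"
            "Rate the safety risk from 0 (safe) to 1 (irreversible harm)."
        )
        scores[perspective] = float(judge(prompt))  # judge: any LLM wrapper returning a number
    return scores

def safe_to_execute(call: ToolCall, judge, threshold: float = 0.5) -> bool:
    """Block the tool call if any perspective exceeds the risk threshold."""
    return all(score < threshold for score in assess_risk(call, judge).values())
```

In the paper, each perspective is further split into three dimensions (nine in total); the sketch collapses them into a single score per perspective for brevity.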
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM tool safety prospectively to prevent harm
Evaluating risks from malicious instructions and diverse tools
Enhancing LLM security awareness across multiple dimensions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prospective benchmark for LLM tool safety
SafeInstructTool framework with nine dimensions
Enhanced safety awareness across instruction-tool interactions
Hongfei Xia
Beijing Institute of Technology
Hongru Wang
The Chinese University of Hong Kong
Zeming Liu
Beihang University
Qian Yu
Professor, Dept. of Earth, Geographic, and Climate Sciences, University of Massachusetts-Amherst (GIS, remote sensing, spatial modeling)
Yuhang Guo
Beijing Institute of Technology
Haifeng Wang
Baidu