TRUSTDESC: Preventing Tool Poisoning in LLM Applications via Trusted Description Generation

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to tool poisoning attacks (TPAs)—particularly stealthy implicit variants—when invoking external tools. To mitigate this threat, the authors propose TRUSTDESC, a novel framework that automatically generates trustworthy and faithful descriptions directly from tool implementation code, thereby fundamentally blocking both explicit and implicit TPAs. TRUSTDESC integrates reachability-aware static analysis, LLM-guided code reduction, code snippet–based description generation, and dynamic behavior validation into an end-to-end, three-stage pipeline. Evaluation on 52 real-world tools demonstrates that TRUSTDESC substantially improves task completion rates, effectively defends against implicit TPAs, and incurs minimal computational and economic overhead.
📝 Abstract
Large language models (LLMs) increasingly rely on external tools to perform time-sensitive tasks and real-world actions. While tool integration expands LLM capabilities, it also introduces a new prompt-injection attack surface: tool poisoning attacks (TPAs). Attackers manipulate tool descriptions by embedding malicious instructions (explicit TPAs) or misleading claims (implicit TPAs) to influence model behavior and tool selection. Existing defenses mainly detect anomalous instructions and remain ineffective against implicit TPAs. In this paper, we present TRUSTDESC, the first framework for preventing tool poisoning by automatically generating trusted tool descriptions from implementations. TRUSTDESC derives implementation-faithful descriptions through a three-stage pipeline. SliceMin performs reachability-aware static analysis and LLM-guided debloating to extract minimal tool-relevant code slices. DescGen synthesizes descriptions from these slices while mitigating misleading or adversarial code artifacts. DynVer refines descriptions through dynamic verification by executing synthesized tasks and validating behavioral claims. We evaluate TRUSTDESC on 52 real-world tools across multiple tool ecosystems. Results show that TRUSTDESC produces accurate tool descriptions that improve task completion rates while mitigating implicit TPAs at their root, with minimal time and monetary overhead.
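The abstract's three-stage pipeline (SliceMin → DescGen → DynVer) can be pictured as a simple composition of functions. The sketch below is purely illustrative, based only on the abstract: the function names mirror the stage names, but all logic (naive text-based slicing, stub description synthesis, a boolean verification flag) is a hypothetical placeholder, not the authors' implementation.

```python
# Hypothetical sketch of TRUSTDESC's three-stage flow, based only on the
# abstract. All logic is illustrative placeholder code, not the authors'
# implementation (which uses static analysis, an LLM, and real execution).

def slice_min(tool_code: str, entry_point: str) -> str:
    """SliceMin (illustrative): keep only code relevant to the tool's
    entry point. Here reachability analysis is approximated by a naive
    text filter that drops comment lines."""
    kept = [line for line in tool_code.splitlines()
            if entry_point in line or not line.strip().startswith("#")]
    return "\n".join(kept)

def desc_gen(code_slice: str) -> str:
    """DescGen (illustrative): synthesize a description from the slice.
    A real system would prompt an LLM; this stub derives a summary from
    the first function definition it finds."""
    first_def = next((l for l in code_slice.splitlines()
                      if l.strip().startswith("def ")), "")
    name = first_def.split("def ")[-1].split("(")[0] if first_def else "tool"
    return f"Tool '{name}': description derived from implementation code."

def dyn_ver(description: str, behaves_ok: bool) -> str:
    """DynVer (illustrative): keep the description only if dynamic checks
    on synthesized tasks pass; otherwise flag it for refinement. Here the
    check result is passed in as a boolean."""
    return description if behaves_ok else description + " [UNVERIFIED]"

# Usage: run the pipeline on a toy tool implementation.
tool_src = ("def get_weather(city):\n"
            "    # fetch forecast\n"
            "    return query(city)")
desc = dyn_ver(desc_gen(slice_min(tool_src, "get_weather")), behaves_ok=True)
print(desc)
```

The key design point the paper argues for is that the description is generated from the implementation rather than supplied by the (potentially malicious) tool author, so both explicit and implicit poisoned descriptions are cut off at the source.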
Problem

Research questions and friction points this paper is trying to address.

tool poisoning attacks
large language models
trusted descriptions
implicit TPAs
external tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

tool poisoning attacks
trusted description generation
static and dynamic analysis
LLM-guided debloating
implementation-faithful synthesis
Hengkai Ye
The Pennsylvania State University
Zhechang Zhang
The Pennsylvania State University
Jinyuan Jia
Assistant Professor, Penn State
AI Security
Hong Hu
The Pennsylvania State University
System Security, Software Security