ChainFuzzer: Greybox Fuzzing for Workflow-Level Multi-Tool Vulnerabilities in LLM Agents

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses long-range vulnerabilities in multi-tool LLM agents, which arise from complex cross-tool data flows and often evade detection by single-tool testing. To tackle this, the authors propose ChainFuzzer, the first greybox fuzzing framework tailored to workflow-level multi-tool collaboration. ChainFuzzer integrates tool-chain dependency analysis, Trace-guided Prompt Solving (TPS), and guardrail-aware fuzzing, complemented by a data-flow evidence-based oracle mechanism for vulnerability validation. Evaluated on 20 open-source LLM agents, ChainFuzzer uncovered 365 reproducible vulnerabilities, 302 of which require multi-tool interaction, with a tool-chain extraction precision of 96.49%. TPS boosts chain reachability to 95.45%, and the framework achieves a discovery efficiency of 3.02 vulnerabilities per million tokens.
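The tool-chain dependency analysis described above can be sketched as a graph problem: connect tools whose outputs feed other tools' inputs, then walk backwards from high-impact sink operations to enumerate candidate source-to-sink chains. The sketch below is a minimal illustration of that idea, not ChainFuzzer's actual implementation; the tool specs, field names, and `run_shell` sink are all hypothetical.

```python
from itertools import product

# Hypothetical tool specs: each tool declares the data types it reads and writes.
# These names are illustrative, not ChainFuzzer's actual representation.
TOOLS = {
    "fetch_url":  {"reads": {"url"},  "writes": {"html"}},
    "parse_html": {"reads": {"html"}, "writes": {"text"}},
    "save_note":  {"reads": {"text"}, "writes": {"note_id"}},
    "run_shell":  {"reads": {"text"}, "writes": {"stdout"}},  # sink: command execution
}
SINKS = {"run_shell"}  # high-impact operations

def edges(tools):
    """Cross-tool dependency edges: A -> B when A writes a type that B reads."""
    return {(a, b) for a, b in product(tools, repeat=2)
            if a != b and tools[a]["writes"] & tools[b]["reads"]}

def chains_to_sinks(tools, sinks, max_len=3):
    """Walk the dependency graph backwards from each sink to enumerate
    candidate tool chains (plausible source-to-sink dataflows)."""
    dep = edges(tools)
    out, frontier = [], [[s] for s in sinks]
    for _ in range(max_len - 1):
        nxt = [[a] + chain
               for chain in frontier
               for (a, b) in dep
               if b == chain[0] and a not in chain]
        out.extend(nxt)
        frontier = nxt
    return out
```

With the toy specs above, `chains_to_sinks(TOOLS, SINKS)` yields chains such as `["parse_html", "run_shell"]` and `["fetch_url", "parse_html", "run_shell"]`, i.e. web content flowing into command execution through tool composition.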

📝 Abstract
Tool-augmented LLM agents increasingly rely on multi-step, multi-tool workflows to complete real tasks. This design expands the attack surface, because data produced by one tool can be persisted and later reused as input to another tool, enabling exploitable source-to-sink dataflows that only emerge through tool composition. We study this risk as multi-tool vulnerabilities in LLM agents, and show that existing discovery efforts focused on single-tool or single-hop testing miss these long-horizon behaviors and provide limited debugging value. We present ChainFuzzer, a greybox framework for discovering and reproducing multi-tool vulnerabilities with auditable evidence. ChainFuzzer (i) identifies high-impact operations with strict source-to-sink dataflow evidence and extracts plausible upstream candidate tool chains based on cross-tool dependencies, (ii) uses Trace-guided Prompt Solving (TPS) to synthesize stable prompts that reliably drive the agent to execute target chains, and (iii) performs guardrail-aware fuzzing to reproduce vulnerabilities under LLM guardrails via payload mutation and sink-specific oracles. We evaluate ChainFuzzer on 20 popular open-source LLM agent apps (998 tools). ChainFuzzer extracts 2,388 candidate tool chains and synthesizes 2,213 stable prompts, confirming 365 unique, reproducible vulnerabilities across 19/20 apps (302 require multi-tool execution). Component evaluation shows tool-chain extraction achieves 96.49% edge precision and 91.50% strict chain precision; TPS increases chain reachability from 27.05% to 95.45%; guardrail-aware fuzzing boosts payload-level trigger rate from 18.20% to 88.60%. Overall, ChainFuzzer achieves 3.02 vulnerabilities per 1M tokens, providing a practical foundation for testing and hardening real-world multi-tool agent systems.
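The guardrail-aware fuzzing step in the abstract (payload mutation plus sink-specific oracles) can be sketched as a loop that mutates a payload until a variant both evades the input guardrail and trips the oracle at the sink. Everything here is a hedged toy under assumed interfaces: the keyword-based `guardrail`, the mutation list, and the `deliver`/`oracle` callbacks are stand-ins, not the paper's components.

```python
import base64

def guardrail(prompt: str) -> bool:
    """Toy input filter: blocks prompts containing an obvious attack keyword
    (a stand-in for an LLM guardrail; purely illustrative)."""
    return "rm -rf" in prompt

# Illustrative payload mutators: identity, shell-whitespace evasion, encoding wrapper.
MUTATORS = [
    lambda p: p,
    lambda p: p.replace(" ", "${IFS}"),
    lambda p: base64.b64encode(p.encode()).decode(),
]

def fuzz_payload(payload, deliver, oracle, mutators=MUTATORS):
    """Try mutated payloads until one passes the guardrail and the
    sink-specific oracle confirms the vulnerability triggered."""
    for mutate in mutators:
        candidate = mutate(payload)
        if guardrail(candidate):
            continue  # blocked before reaching the agent
        observed = deliver(candidate)  # run the target tool chain with this payload
        if oracle(observed):
            return candidate  # reproducible trigger, with the sink observation as evidence
    return None
```

For example, with a pass-through `deliver` and an oracle that checks whether the command fragments reached the sink, the raw payload `"rm -rf /tmp/x"` is blocked, but the `${IFS}`-mutated variant passes the filter and trips the oracle.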
Problem

Research questions and friction points this paper is trying to address.

multi-tool vulnerabilities
LLM agents
workflow-level security
source-to-sink dataflow
tool composition
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-tool vulnerabilities
greybox fuzzing
tool-chain extraction
trace-guided prompt solving
guardrail-aware fuzzing