Towards Verifiably Safe Tool Use for LLM Agents

📅 2026-01-12
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the security risks posed by large language model (LLM) agents during tool invocation, such as inadvertent leakage of sensitive data or overwriting of critical records—hazards for which existing approaches lack verifiable guarantees. To bridge this gap, the paper introduces a novel integration of System-Theoretic Process Analysis (STPA) with formal specifications to systematically identify hazards in agent workflows and derive enforceable safety requirements. These requirements are then translated into executable constraints on data flows and tool invocation sequences. Building upon an enhanced Model Context Protocol (MCP) framework, the approach incorporates structured capability control and trust-labeling mechanisms to enable proactive, verifiable protection of tool interactions. By significantly reducing reliance on manual verification, this method advances LLM agent design from empirical reliability toward a paradigm grounded in formal security assurances.
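The "executable constraints on tool invocation sequences" mentioned above can be illustrated with a minimal sketch. All tool names and the specific rule here are hypothetical, not from the paper: an STPA-derived requirement such as "no externally visible tool call after confidential data has entered the agent's context" becomes a small stateful guard checked before every invocation.

```python
# Hypothetical sketch: an STPA-derived temporal constraint on tool
# sequences, enforced as a guard checked before each tool invocation.
# Tool names and the concrete rule are illustrative assumptions.

CONFIDENTIAL_SOURCES = {"read_hr_records"}          # tools that taint the context
EXTERNAL_SINKS = {"send_email", "post_webhook"}     # tools visible outside the org

class SequenceGuard:
    def __init__(self):
        self.tainted = False  # True once confidential data entered the context

    def allow(self, tool_name: str) -> bool:
        """Return True iff invoking tool_name keeps the trace safe."""
        if self.tainted and tool_name in EXTERNAL_SINKS:
            return False  # would leak confidential data externally
        if tool_name in CONFIDENTIAL_SOURCES:
            self.tainted = True  # record that confidential data was read
        return True

guard = SequenceGuard()
print(guard.allow("send_email"))       # allowed: nothing confidential yet
print(guard.allow("read_hr_records"))  # allowed: the read itself is permitted
print(guard.allow("send_email"))       # blocked: external send after tainting
```

Because the guard is a deterministic state machine rather than a model-based judgment, the property it enforces can be verified once over all possible tool sequences, which is the kind of guarantee the paper contrasts with empirical reliability.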

📝 Abstract
Large language model (LLM)-based AI agents extend LLM capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents. While this empowers agents to perform complex tasks, LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records, which are unacceptable in enterprise contexts. Current approaches to mitigating these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety. Methods like information flow control (IFC) and temporal constraints aim to provide guarantees but often require extensive human annotation. We propose a process that starts with applying System-Theoretic Process Analysis (STPA) to identify hazards in agent workflows, derive safety requirements, and formalize them as enforceable specifications on data flows and tool sequences. To enable this, we introduce a capability-enhanced Model Context Protocol (MCP) framework that requires structured labels on capabilities, confidentiality, and trust level. Together, these contributions aim to shift LLM-based agent safety from ad hoc reliability fixes to proactive guardrails with formal guarantees, while reducing dependence on user confirmation and making autonomy a deliberate design choice.
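The "structured labels on capabilities, confidentiality, and trust level" can be pictured as metadata attached to each tool descriptor and checked by a policy before invocation. The field names, label lattice, and thresholds below are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch of the structured labels a capability-enhanced MCP
# server might attach to each tool, plus a policy check over them.
# All field names, levels, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolLabel:
    capabilities: frozenset   # e.g. {"read", "write", "network"}
    confidentiality: str      # "public" < "internal" < "secret"
    trust: int                # 0 = untrusted source, 2 = vetted

LEVELS = {"public": 0, "internal": 1, "secret": 2}

def permitted(label: ToolLabel, max_confidentiality: str,
              min_trust: int, allowed_caps: frozenset) -> bool:
    """Check a tool's labels against the caller's policy before invoking it."""
    return (label.capabilities <= allowed_caps                       # no excess capability
            and LEVELS[label.confidentiality] <= LEVELS[max_confidentiality]
            and label.trust >= min_trust)                            # sufficiently vetted

# A hypothetical read-only networked tool at "internal" confidentiality:
crm_export = ToolLabel(frozenset({"read", "network"}), "internal", trust=2)
print(permitted(crm_export, "secret", 1, frozenset({"read", "network"})))  # True
print(permitted(crm_export, "public", 1, frozenset({"read", "network"})))  # False
```

Making the check a pure function of declared labels, rather than of model output, is what allows the protection to be proactive and machine-verifiable rather than dependent on per-call user confirmation.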
Problem

Research questions and friction points this paper is trying to address.

LLM agents
tool use safety
verifiable safety
hazard mitigation
enterprise risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

verifiable safety
System-Theoretic Process Analysis (STPA)
Model Context Protocol (MCP)
information flow control
LLM agents