🤖 AI Summary
This work addresses the lack of logical safety and verifiability in current large language models (LLMs) when they invoke external tools, a limitation that often leads to errors or hallucinations because invocation decisions rest on unverified natural language reasoning. To mitigate this, we propose ToolGate, a novel framework that, for the first time, introduces formal Hoare-style contracts (preconditions and postconditions) into LLM tool invocation. By representing the symbolic state as a typed key-value mapping and verifying each tool result at runtime, ToolGate ensures that state transitions are driven exclusively by verified tool executions. This approach establishes a logically sound, verifiable, and hallucination-resistant mechanism for state updates, significantly enhancing system reliability and performance on complex multi-step reasoning tasks.
📝 Abstract
Large Language Models (LLMs) augmented with external tools have demonstrated remarkable capabilities in complex reasoning tasks. However, existing frameworks rely heavily on natural language reasoning to decide when tools may be invoked and whether their results should be committed, and thus lack formal guarantees of logical safety and verifiability. We present \textbf{ToolGate}, a forward execution framework that provides logical safety guarantees and verifiable state evolution for LLM tool calling. ToolGate maintains an explicit symbolic state, a typed key-value mapping that represents trusted world information throughout the reasoning process. Each tool is formalized as a Hoare-style contract consisting of a precondition and a postcondition: the precondition gates tool invocation by checking whether the current state satisfies the required conditions, and the postcondition is verified at runtime to decide whether the tool's result may be committed to the state. Our approach guarantees that the symbolic state evolves only through verified tool executions, preventing invalid or hallucinated results from corrupting the world representation. Experimental validation demonstrates that ToolGate significantly improves the reliability and verifiability of tool-augmented LLM systems while maintaining competitive performance on complex multi-step reasoning tasks. This work establishes a foundation for building more trustworthy and debuggable AI systems that integrate language models with external tools.
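The contract mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: all names (`ToolContract`, `invoke`, the currency-conversion tool) are hypothetical, and the symbolic state is modeled as a plain dictionary standing in for the typed key-value mapping.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Symbolic state: a typed key-value mapping of trusted world information.
State = Dict[str, Any]

@dataclass
class ToolContract:
    """Hoare-style contract {P} tool {Q} (illustrative names, not the paper's API)."""
    name: str
    precondition: Callable[[State], bool]            # P: gates invocation
    tool: Callable[[State], State]                   # returns a proposed state delta
    postcondition: Callable[[State, State], bool]    # Q: checked before committing

def invoke(state: State, c: ToolContract) -> State:
    # Gate: refuse to call the tool unless the precondition holds on the current state.
    if not c.precondition(state):
        raise ValueError(f"precondition of {c.name} violated; invocation blocked")
    delta = c.tool(state)
    proposed = {**state, **delta}
    # Runtime verification: commit the delta only if the postcondition holds.
    if not c.postcondition(state, proposed):
        raise ValueError(f"postcondition of {c.name} failed; result not committed")
    return proposed  # the state evolves only through verified tool executions

# Hypothetical tool: currency conversion with a fixed example rate.
convert = ToolContract(
    name="convert_usd_to_eur",
    precondition=lambda s: "amount_usd" in s and s["amount_usd"] >= 0,
    tool=lambda s: {"amount_eur": round(s["amount_usd"] * 0.9, 2)},
    postcondition=lambda old, new: new.get("amount_eur", -1.0) >= 0,
)

state: State = {"amount_usd": 100.0}
state = invoke(state, convert)  # -> {'amount_usd': 100.0, 'amount_eur': 90.0}
```

A failed precondition (e.g. a missing or negative `amount_usd`) blocks the call entirely, and a failed postcondition discards the tool's output, so a hallucinated or malformed result can never corrupt the state.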