🤖 AI Summary
This work addresses error propagation and cumulative information distortion in sequential tool use by large language model agents. We propose the first analytical framework grounded in martingale theory, incorporating a hybrid distortion metric that combines discrete factual matching with continuous semantic similarity. Theoretically, we prove that expected distortion grows linearly over time, with high-probability deviations from this trend bounded by $O(\sqrt{T})$. Building on these insights, we derive practical error-control principles, including semantic weighting and periodic re-anchoring. Experiments on Qwen2-7B, Llama-3-8B, and Mistral-7B validate our theoretical predictions: distortion exhibits linear growth, semantic weighting reduces distortion by 80%, and re-anchoring every nine steps effectively mitigates error accumulation, significantly enhancing the predictability of agent behavior.
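The claimed behavior can be illustrated with a minimal simulation (all parameters are illustrative, not taken from the paper): per-step errors with bounded increments produce linear mean growth, while deviations from the linear trend stay inside an Azuma-Hoeffding $O(\sqrt{T})$ envelope.

```python
import math
import random

random.seed(0)  # deterministic run for reproducibility

T = 400        # number of sequential tool calls (illustrative)
c = 0.1        # illustrative bound on the martingale increment per step
drift = 0.05   # illustrative mean distortion added per step
delta = 0.01   # failure probability for the high-probability envelope

# Cumulative distortion = linear drift + bounded-increment martingale noise.
D = 0.0
max_excess = -math.inf
for t in range(1, T + 1):
    D += drift + random.uniform(-c, c)  # increment bounded by drift +/- c
    # Azuma-Hoeffding: |D_t - drift*t| <= c*sqrt(2*t*ln(2/delta)) w.p. 1-delta
    envelope = c * math.sqrt(2 * t * math.log(2 / delta))
    max_excess = max(max_excess, abs(D - drift * t) - envelope)

# max_excess < 0 means the trajectory never left the 99% envelope
print(max_excess)
```

With these parameters the simulated trajectory stays well inside the envelope, mirroring the paper's concentration claim at the level of a toy model.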
📝 Abstract
As AI agents powered by large language models (LLMs) increasingly use external tools for high-stakes decisions, a critical reliability question arises: how do errors propagate across sequential tool calls? We introduce the first theoretical framework for analyzing error accumulation in Model Context Protocol (MCP) agents, proving that cumulative distortion exhibits linear growth with high-probability deviations bounded by $O(\sqrt{T})$. This concentration property ensures predictable system behavior and rules out exponential failure modes. We develop a hybrid distortion metric combining discrete fact matching with continuous semantic similarity, then establish martingale concentration bounds on error propagation through sequential tool interactions. Experiments across Qwen2-7B, Llama-3-8B, and Mistral-7B validate our theoretical predictions, showing that empirical distortion tracks the linear trend with deviations consistently within $O(\sqrt{T})$ envelopes. Two key findings emerge: semantic weighting reduces distortion by 80%, and periodic re-grounding approximately every nine steps suffices for error control. We translate these concentration guarantees into actionable deployment principles for trustworthy agent systems.
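A hybrid metric of this kind can be sketched as a convex combination of a discrete fact-mismatch term and a continuous semantic-distance term. The function name, the set-based fact matching, and the weight `alpha` below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def hybrid_distortion(facts_ref, facts_obs, emb_ref, emb_obs, alpha=0.5):
    """Convex mix of discrete fact mismatch and semantic distance.

    facts_ref/facts_obs: reference and observed fact sets (hashable items).
    emb_ref/emb_obs: embedding vectors of the reference and observed text.
    alpha: illustrative weight on the discrete term.
    """
    # Discrete term: fraction of reference facts missing from the observation.
    missing = len(set(facts_ref) - set(facts_obs))
    d_fact = missing / max(len(facts_ref), 1)

    # Continuous term: cosine distance between the two embeddings.
    dot = sum(a * b for a, b in zip(emb_ref, emb_obs))
    norm = (math.sqrt(sum(a * a for a in emb_ref))
            * math.sqrt(sum(b * b for b in emb_obs)))
    d_sem = 1.0 - dot / norm if norm else 1.0

    return alpha * d_fact + (1.0 - alpha) * d_sem
```

For example, an observation that preserves one of two reference facts but is semantically identical to the reference scores `0.5 * 0.5 + 0.5 * 0.0 = 0.25` under the default weighting.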