🤖 AI Summary
AI agents often fail at complex tasks while remaining overconfident, and static calibration methods struggle with the compounding errors, tool-induced uncertainty, and opaque failure modes that arise along execution trajectories. This work formalizes the agent confidence calibration problem for the first time and introduces the Holistic Trajectory Calibration (HTC) framework, which models both the macro-level dynamics and micro-level stability of execution trajectories to enable interpretable, cross-domain, and generalizable process-level calibration. The proposed General Agent Calibrator (GAC) consistently outperforms strong baselines across eight benchmarks, diverse large language models, and agent architectures, achieving state-of-the-art Expected Calibration Error (ECE) and discrimination ability, and excelling in particular on the cross-domain GAIA benchmark.
📝 Abstract
AI agents are rapidly advancing from passive language models to autonomous systems executing complex, multi-step tasks. Yet their tendency to remain confident even when failing is a fundamental barrier to deployment in high-stakes settings. Existing calibration methods, built for static single-turn outputs, cannot address the unique challenges of agentic systems, such as compounding errors along trajectories, uncertainty from external tools, and opaque failure modes. To address these challenges, we introduce, for the first time, the problem of Agentic Confidence Calibration and propose Holistic Trajectory Calibration (HTC), a novel diagnostic framework that extracts rich process-level features, ranging from macro dynamics to micro stability, across an agent's entire trajectory. Powered by a simple, interpretable model, HTC consistently surpasses strong baselines in both calibration and discrimination across eight benchmarks, multiple LLMs, and diverse agent frameworks. Beyond performance, HTC delivers three essential advances: it provides interpretability by revealing the signals behind failure, enables transferability by applying across domains without retraining, and achieves generalization through a General Agent Calibrator (GAC) that attains the best calibration (lowest ECE) on the out-of-domain GAIA benchmark. Together, these contributions establish a new process-centric paradigm for confidence calibration, providing a framework for diagnosing and enhancing the reliability of AI agents.
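As background for the headline metric, Expected Calibration Error (ECE) bins predictions by stated confidence and averages the gap between each bin's accuracy and its mean confidence, weighted by bin size. The sketch below is a minimal, generic implementation of that standard definition (equal-width bins), not the paper's evaluation code; the function name and example values are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Minimal ECE sketch: bin predictions into equal-width confidence
    bins and average |accuracy - mean confidence|, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each prediction to a bin; clamp confidence == 1.0 into the top bin.
    bins = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the bin's share of samples
    return ece

# Hypothetical overconfident agent: ~0.9 confidence, but only 2/5 tasks succeed.
conf = [0.95, 0.90, 0.92, 0.88, 0.91]
succ = [1, 0, 0, 1, 0]
print(expected_calibration_error(conf, succ))  # large gap -> poorly calibrated
```

A well-calibrated agent's stated confidence matches its empirical success rate in every bin, driving this quantity toward zero; the paper's "lowest ECE" claims refer to exactly this kind of gap.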