🤖 AI Summary
Existing tool-use agents suffer from poor confidence calibration, which hinders reliable risk–reward trade-offs and limits both safety and practicality. To address this, we propose model-internal confidence estimators (MICE) that leverage intermediate-layer representations of language models: logitLens decodes the logits at each layer, similarity scores are computed between each layer's generation and the final output, and a lightweight learned probabilistic classifier fuses these features to assess the credibility of each tool call. The approach requires no model fine-tuning, is sample-efficient, and generalizes zero-shot to unseen APIs. Evaluated on the STE tool-calling benchmark with Llama-3 models, MICE matches or beats baselines on smoothed expected calibration error and substantially improves "expected tool-calling utility", a new metric quantifying utility-weighted correctness, while remaining robust across APIs and varying risk levels.
📝 Abstract
Tool-using agents that act in the world need to be both useful and safe. Well-calibrated model confidences can be used to weigh the risk versus reward of potential actions, but prior work shows that many models are poorly calibrated. Inspired by interpretability literature exploring the internals of models, we propose a novel class of model-internal confidence estimators (MICE) to better assess confidence when calling tools. MICE first decodes from each intermediate layer of the language model using logitLens and then computes similarity scores between each layer's generation and the final output. These features are fed into a learned probabilistic classifier to assess confidence in the decoded output. On the simulated trial and error (STE) tool-calling dataset using Llama3 models, we find that MICE beats or matches the baselines on smoothed expected calibration error. Using MICE confidences to determine whether to call a tool significantly improves over strong baselines on a new metric, expected tool-calling utility. Further experiments show that MICE is sample-efficient, can generalize zero-shot to unseen APIs, and results in higher tool-calling utility in scenarios with varying risk levels. Our code is open source, available at https://github.com/microsoft/mice_for_cats.
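The pipeline described in the abstract (decode each intermediate layer with logitLens, score each layer's generation against the final output, fuse the scores with a learned classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Jaccard token-overlap similarity and the fixed logistic weights/bias are stand-in assumptions for the paper's similarity features and trained probabilistic classifier.

```python
import math


def layer_similarity(layer_gen: str, final_gen: str) -> float:
    """Toy similarity between an intermediate-layer decoding and the
    final output: Jaccard overlap of whitespace tokens (a stand-in for
    the paper's per-layer similarity scores)."""
    a, b = set(layer_gen.split()), set(final_gen.split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def mice_confidence(layer_gens, final_gen, weights, bias):
    """Fuse per-layer similarity features into a confidence estimate
    with a logistic classifier. `weights` and `bias` are hypothetical
    learned parameters; in the paper they come from a trained
    probabilistic classifier."""
    feats = [layer_similarity(g, final_gen) for g in layer_gens]
    z = bias + sum(w * f for w, f in zip(weights, feats))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> confidence in [0, 1]
```

With this sketch, layers that already agree with the final tool call yield high confidence, while disagreement across layers yields low confidence; an agent can then compare that confidence against a risk threshold before executing the call.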