🤖 AI Summary
This study addresses a limitation of current large language model (LLM) agent evaluations, which rely predominantly on task-completion rates and fail to discriminate differences in intermediate state-tracking ability. To overcome this, the authors propose the WMF-AM probe, a method that quantifies an LLM's ability to track cumulative arithmetic state without access to an external scratchpad. Evaluated across 20 open-weight models, WMF-AM is analyzed rigorously using K-calibrated probing, Bonferroni correction, and partial Kendall tau analyses with controlled covariates, complemented by three construct-isolation ablations. The findings establish state-tracking ability as a predictor distinct from both task-completion rate and model scale: WMF-AM scores correlate significantly with agent performance (Kendall's τ = 0.612, p < 0.001), and the relationship remains robust even after controlling for confounding factors.
📝 Abstract
Task-completion rate is the standard proxy for LLM agent capability, but models with identical completion scores can differ substantially in their ability to track intermediate state. We introduce Working Memory Fidelity-Active Manipulation (WMF-AM), a calibrated no-scratchpad probe of cumulative arithmetic state tracking, and evaluate it on 20 open-weight models (0.5B-35B, 13 families) against a released deterministic 10-task agent battery. In a pre-specified, Bonferroni-corrected analysis, WMF-AM predicts agent performance with Kendall's tau = 0.612 (p < 0.001, 95% CI [0.360, 0.814]); exploratory partial-tau analyses suggest this signal persists after controlling for completion score and model scale. Three construct-isolation ablations (K = 1 control, non-arithmetic ceiling, yoked cancellation) support the interpretation that cumulative state tracking under load, rather than single-step arithmetic or entity tracking alone, is the primary difficulty source. K-calibration keeps the probe in a discriminative range where prior fixed-depth benchmarks become non-discriminative; generalization beyond this open-weight sample remains open.
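As an illustrative aside, the partial-tau control used in the exploratory analysis can be sketched in a few lines. The sketch below assumes SciPy and the standard first-order partial-correlation formula applied to Kendall's tau; the helper name `partial_kendall_tau` and the data are illustrative, not the paper's released code or measurements.

```python
import math
from scipy.stats import kendalltau

def partial_kendall_tau(x, y, z):
    """Kendall tau between x and y, controlling for z, via the
    first-order partial correlation formula:
        tau_xy.z = (tau_xy - tau_xz * tau_yz)
                   / sqrt((1 - tau_xz^2) * (1 - tau_yz^2))
    """
    t_xy, _ = kendalltau(x, y)
    t_xz, _ = kendalltau(x, z)
    t_yz, _ = kendalltau(y, z)
    return (t_xy - t_xz * t_yz) / math.sqrt((1 - t_xz**2) * (1 - t_yz**2))

# Synthetic example: probe scores, agent scores, and a covariate
# (e.g. model scale) as per-model rankings.
probe = [1, 2, 3, 4, 5]
agent = [1, 2, 3, 5, 4]
scale = [1, 3, 2, 4, 5]

raw_tau, p_value = kendalltau(probe, agent)
controlled_tau = partial_kendall_tau(probe, agent, scale)
```

On this toy data the raw tau is 0.8 and the scale-controlled partial tau drops to about 0.67, illustrating how the control can attenuate, without erasing, the probe-to-agent association.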