🤖 AI Summary
Current AI evaluation frameworks predominantly emphasize model capabilities while overlooking the critical influence of model propensities on performance and safety. Moreover, traditional item response theory struggles to capture the non-monotonic, "too-much-of-a-good-thing" nature of such propensities. This work proposes the first formal framework that links task success probability to propensities via a dual logistic model, defining an "ideal interval" to quantify propensities and estimating them using task-agnostic scoring criteria. Moving beyond capability-centric paradigms, the approach successfully quantifies propensity shifts across six families of large language models. The measured propensities effectively predict out-of-distribution task behavior, and combining propensity with capability yields substantially improved behavioral prediction performance.
📄 Abstract
AI evaluation has primarily focused on measuring capabilities, with formal approaches inspired by Item Response Theory (IRT) being increasingly applied. Yet propensities, the tendencies of models to exhibit particular behaviours, play a central role in determining both performance and safety outcomes. However, traditional IRT describes a model's success on a task as a monotonic function of model capabilities and task demands, an approach unsuited to propensities, where both excess and deficiency can be problematic. Here, we introduce the first formal framework for measuring AI propensities by using a bilogistic formulation for model success, which attributes high success probability when the model's propensity is within an "ideal band". Further, we estimate the limits of the ideal band using LLMs equipped with newly developed task-agnostic rubrics. Applying our framework to six families of LLMs whose propensities are incited in either direction, we find that we can measure how much a propensity is shifted and what effect this has on the tasks. Critically, propensities estimated using one benchmark successfully predict behaviour on held-out tasks. Moreover, we obtain stronger predictive power when combining propensities and capabilities than from either separately. More broadly, our framework showcases how rigorous propensity measurements can be conducted and how they yield gains over solely using capability evaluations to predict AI behaviour.
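To make the bilogistic idea concrete, here is a minimal sketch of one natural way such a formulation could look: success probability is modelled as a product of two logistic curves, so it is high only when the propensity lies inside the ideal band. The exact functional form, parameter names (`lower`, `upper`, `k`), and the shared-slope assumption are illustrative choices, not the paper's actual specification.

```python
import math


def sigmoid(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))


def bilogistic_success(p: float, lower: float, upper: float, k: float = 10.0) -> float:
    """Sketch of a bilogistic success model (illustrative, not the paper's exact form).

    The first factor rises as propensity p exceeds the band's lower limit;
    the second falls as p exceeds the upper limit. Their product is close to 1
    inside [lower, upper] and decays towards 0 outside it, capturing the
    "both excess and deficiency are problematic" behaviour.
    """
    return sigmoid(k * (p - lower)) * sigmoid(k * (upper - p))


# A propensity in the middle of the ideal band yields high success probability,
# while deficient or excessive propensities yield low success probability.
inside = bilogistic_success(0.5, lower=0.2, upper=0.8)
too_low = bilogistic_success(-0.5, lower=0.2, upper=0.8)
too_high = bilogistic_success(1.5, lower=0.2, upper=0.8)
```

Note that, unlike standard monotonic IRT item curves, this response curve is unimodal: sweeping `p` from low to high traces success probability up through the band and back down, which is exactly the non-monotonic shape a capability-style model cannot represent.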