Capabilities Ain't All You Need: Measuring Propensities in AI

📅 2026-02-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Current AI evaluation frameworks predominantly emphasize model capabilities while overlooking the critical influence of model propensities on performance and safety. Moreover, traditional Item Response Theory struggles to capture the non-monotonic, “too-much-of-a-good-thing” nature of such propensities. This work proposes the first formal framework that links task success probability to propensities via a bilogistic model, which assigns high success probability when a propensity falls within an “ideal band”, and estimates the band's limits using task-agnostic rubrics. Moving beyond capability-centric paradigms, the approach quantifies propensity shifts across six families of large language models. The measured propensities effectively predict behavior on held-out tasks, and combining propensity with capability yields substantially better behavioral prediction than either alone.
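The page does not reproduce the model itself, but one plausible reading of the bilogistic link between propensity and success (a hedged reconstruction; the notation $\theta$, $[\ell, u]$, and $a$ is illustrative, not the paper's) multiplies a rising and a falling logistic:

$$P(\text{success} \mid \theta) \;=\; \sigma\big(a(\theta - \ell)\big)\,\sigma\big(a(u - \theta)\big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}},$$

where $\theta$ is the model's propensity level and $[\ell, u]$ the ideal band: the first factor penalizes deficiency ($\theta < \ell$), the second penalizes excess ($\theta > u$), producing the non-monotonic shape that a single monotonic IRT curve cannot express.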

📝 Abstract
AI evaluation has primarily focused on measuring capabilities, with formal approaches inspired by Item Response Theory (IRT) being increasingly applied. Yet propensities - the tendencies of models to exhibit particular behaviours - play a central role in determining both performance and safety outcomes. However, traditional IRT describes a model's success on a task as a monotonic function of model capabilities and task demands, an approach unsuited to propensities, where both excess and deficiency can be problematic. Here, we introduce the first formal framework for measuring AI propensities, using a bilogistic formulation for model success that attributes high success probability when the model's propensity is within an "ideal band". Further, we estimate the limits of the ideal band using LLMs equipped with newly developed task-agnostic rubrics. Applying our framework to six families of LLMs whose propensities are incited in either direction, we find that we can measure how much the propensity is shifted and what effect this has on the tasks. Critically, propensities estimated using one benchmark successfully predict behaviour on held-out tasks. Moreover, we obtain stronger predictive power when combining propensities and capabilities than when using either separately. More broadly, our framework showcases how rigorous propensity measurements can be conducted and how they yield gains over capability evaluations alone in predicting AI behaviour.
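As a concrete illustration, here is a minimal sketch of such a bilogistic success curve, under the same assumed product-of-logistics form as above; the function name, slope parameter, and band limits are hypothetical, not taken from the paper:

```python
import numpy as np

def bilogistic_success(theta, lower, upper, slope=1.0):
    """Success probability as a bilogistic function of a propensity score.

    Hypothetical reconstruction: a rising logistic at the band's lower edge
    times a falling logistic at its upper edge, so the probability is high
    only for lower <= theta <= upper.
    """
    rise = 1.0 / (1.0 + np.exp(-slope * (theta - lower)))  # penalizes deficiency
    fall = 1.0 / (1.0 + np.exp(-slope * (upper - theta)))  # penalizes excess
    return rise * fall

# Illustrative "ideal band" of [-1, 1] for some propensity (e.g. verbosity):
theta = np.linspace(-4.0, 4.0, 9)
print(np.round(bilogistic_success(theta, lower=-1.0, upper=1.0, slope=2.0), 3))
```

Inside the band both logistic factors are large, so success probability peaks at the band's centre; far outside it, one factor drives the product toward zero in either direction of excess or deficiency.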
Problem

Research questions and friction points this paper is trying to address.

AI evaluation
propensities
Item Response Theory
behavioral tendencies
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

propensities
bilogistic model
ideal band
task-agnostic rubrics
AI evaluation
Daniel Romero-Alvarado
Valencian Research Institute of Artificial Intelligence, Universitat Politècnica de València, Valencia, Spain
Fernando Martínez-Plumed
VRAIN, Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València
Artificial Intelligence · Machine Learning · AI evaluation · Item Response Theory
Lorenzo Pacchiardi
Research Associate, University of Cambridge
Large Language Models · AI evaluation · AI policy · Bayesian Inference · Likelihood-Free Inference
Hugo Save
Existential Risk Observatory, Amsterdam, Netherlands
Siddhesh Milind Pawar
University of Copenhagen, Denmark
Behzad Mehrbakhsh
Valencian Research Institute of Artificial Intelligence, Universitat Politècnica de València, Valencia, Spain
Pablo Antonio Moreno Casares
Quantum algorithm scientist at Xanadu.ai
Theoretical physics · Quantum computing · Quantum algorithms · Quantum chemistry
Ben Slater
Leverhulme Centre for the Future of Intelligence, University of Cambridge
Paolo Bova
Department of Computing & Games, University of Teesside
Peter Romero
Universitat Politècnica de València, University of Cambridge
People Analytics · Psychometrics · Deep Learning · Algebraic Topology · Cybernetics
Zachary R. Tyler
Georgia Institute of Technology
Jonathan Prunty
Leverhulme Centre for the Future of Intelligence, University of Cambridge
Luning Sun
Lawrence Livermore National Lab
AI for Science · Scientific Machine Learning · Uncertainty Quantification · CFD · Variational Inference
Jose Hernandez-Orallo
Valencian Research Institute of Artificial Intelligence, Universitat Politècnica de València, Valencia, Spain; Leverhulme Centre for the Future of Intelligence, University of Cambridge