PredictaBoard: Benchmarking LLM Score Predictability

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) fail unpredictably, succeeding and failing inconsistently even on basic commonsense reasoning tasks, which undermines their safe deployment. Method: This paper introduces PredictaBoard, a collaborative benchmark framework centered on *instance-level error predictability*, i.e., predicting whether a model will err on a given prompt, to enable "foreseeable safety." It establishes an instance-level evaluation protocol that scores LLM–assessor pairs via the rejection rate required to stay within a given error tolerance, supporting comparison across multiple models and multiple assessors. Contribution/Results: Experiments with baseline assessors and state-of-the-art LLMs show that existing predictors perform poorly, underscoring the need for predictive error assessment. PredictaBoard provides an open-source evaluation paradigm for trustworthy, human-intervenable AI systems, shifting LLM safety evaluation from aggregate performance metrics toward fine-grained, instance-level error prediction.

📝 Abstract
Despite possessing impressive skills, Large Language Models (LLMs) often fail unpredictably, demonstrating inconsistent success in even basic common sense reasoning tasks. This unpredictability poses a significant challenge to ensuring their safe deployment, as identifying and operating within a reliable "safe zone" is essential for mitigating risks. To address this, we present PredictaBoard, a novel collaborative benchmarking framework designed to evaluate the ability of score predictors (referred to as assessors) to anticipate LLM errors on specific task instances (i.e., prompts) from existing datasets. PredictaBoard evaluates pairs of LLMs and assessors by considering the rejection rate at different tolerance errors. As such, PredictaBoard stimulates research into developing better assessors and making LLMs more predictable, not only with a higher average performance. We conduct illustrative experiments using baseline assessors and state-of-the-art LLMs. PredictaBoard highlights the critical need to evaluate predictability alongside performance, paving the way for safer AI systems where errors are not only minimised but also anticipated and effectively mitigated. Code for our benchmark can be found at https://github.com/Kinds-of-Intelligence-CFI/PredictaBoard
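The rejection-rate-at-error-tolerance metric from the abstract can be sketched in a few lines. This is a minimal illustration, assuming an assessor that outputs a per-prompt probability that the LLM will answer correctly; the function name and inputs are illustrative and not taken from the PredictaBoard codebase:

```python
def rejection_rate_at_tolerance(scores, correct, tolerance):
    """Smallest fraction of prompts that must be rejected so that the
    error rate on the accepted (highest-confidence) prompts stays
    within `tolerance`.

    scores    -- assessor's predicted probability of success per prompt
    correct   -- 1 if the LLM answered that prompt correctly, else 0
    tolerance -- maximum acceptable error rate on accepted prompts
    """
    # Sort prompts by assessor confidence, most confident first.
    pairs = sorted(zip(scores, correct), key=lambda p: p[0], reverse=True)
    n = len(pairs)
    best_accepted = 0
    errors = 0
    for k, (_, ok) in enumerate(pairs, start=1):  # accept top-k prompts
        errors += 1 - ok
        if errors / k <= tolerance:               # error rate still within tolerance
            best_accepted = k
    return 1 - best_accepted / n                  # fraction of prompts rejected

# With a tolerance of zero errors, the two incorrectly answered prompts
# (and everything ranked below the first error) must be rejected.
scores  = [0.95, 0.9, 0.8, 0.6, 0.4, 0.2]
correct = [1,    1,   1,   1,   0,   0]
print(rejection_rate_at_tolerance(scores, correct, tolerance=0.0))  # one third rejected
```

A well-calibrated assessor ranks the model's failures last, so fewer prompts need to be rejected to meet a given tolerance; this is what lets the benchmark compare LLM–assessor pairs rather than LLMs alone.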
Problem

Research questions and friction points this paper is trying to address.

Unpredictable LLM failures on specific task instances
Lack of reliable "safe zones" for safe deployment
No standard benchmark for assessors that anticipate LLM errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative benchmarking framework for LLM–assessor pairs
Instance-level evaluation of error predictability
Rejection-rate-at-error-tolerance metric