🤖 AI Summary
This study systematically evaluates the robustness of large language models (LLMs) on numerical fact-checking, specifically their stability when judging the veracity of numerical claims against supporting evidence. Method: We propose a controllable numerical-perturbation probing method based on label flipping and conduct multi-dimensional controlled experiments across diverse LLMs, context lengths, and in-context demonstrations. Results: Perturbations cause accuracy drops of up to 62%, and no model is robust across all perturbation types, exposing fundamental deficiencies in numerical reasoning. Crucially, we find that injecting perturbed demonstrations into extended contexts substantially restores performance, offering a practical, context-aware path to stronger numerical robustness. This work is the first to quantify LLM vulnerability in numerical fact verification using standardized probes and to empirically demonstrate the efficacy of context-informed robustness enhancement.
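The probe itself is not spelled out here, but a minimal sketch of the label-flipping idea, assuming the probe rewrites one number in a claim so that its gold verdict flips, might look like the following (the function name, regex, and scaling factors are all hypothetical, not the paper's implementation):

```python
import random
import re


def flip_label_perturbation(claim: str, gold_label: str) -> tuple[str, str]:
    """Rewrite the first number in a claim so the expected verdict flips."""
    match = re.search(r"\d+(?:\.\d+)?", claim)
    if match is None:
        return claim, gold_label  # nothing numeric to perturb
    value = float(match.group())
    # Scale the figure well away from the evidence value; a claim that was
    # supported by the evidence should now be contradicted by it.
    perturbed = value * random.choice([0.1, 3.0, 10.0])
    new_claim = claim[:match.start()] + f"{perturbed:g}" + claim[match.end():]
    flipped = "refuted" if gold_label == "supported" else "supported"
    return new_claim, flipped


claim = "Unemployment fell to 4.2% in 2023."
print(flip_label_perturbation(claim, "supported"))
# e.g. ('Unemployment fell to 12.6% in 2023.', 'refuted')
```

A probe like this lets the evaluator control exactly which number changes and by how much, so any change in the model's verdict can be attributed to the perturbation rather than to other prompt variation.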
📝 Abstract
Large language models show strong performance on knowledge-intensive tasks such as fact-checking and question answering, yet they often struggle with numerical reasoning. We present a systematic evaluation of state-of-the-art models on veracity prediction for numerical claim-evidence pairs, using controlled perturbations, including label-flipping probes, to test robustness. Our results show that even leading proprietary systems suffer accuracy drops of up to 62% under certain perturbations, and no model proves robust across all conditions. We further find that increasing context length generally reduces accuracy, but that most models recover substantially when the extended context is enriched with perturbed demonstrations. These findings highlight critical limitations in numerical fact-checking and suggest that robustness remains an open challenge for current language models.
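To make the context-enrichment finding concrete, here is a hedged sketch of injecting label-flipped demonstrations into a few-shot fact-checking prompt; the template and field names are assumptions for illustration, not the paper's actual setup:

```python
def build_prompt(demos, claim, evidence):
    """Assemble a few-shot fact-checking prompt.

    Passing perturbed (claim, evidence, verdict) triples in `demos` is the
    context-enrichment strategy described above; the prompt template itself
    is an assumption, not the paper's actual format.
    """
    parts = [f"Claim: {c}\nEvidence: {e}\nVerdict: {v}\n" for c, e, v in demos]
    parts.append(f"Claim: {claim}\nEvidence: {evidence}\nVerdict:")
    return "\n".join(parts)


demos = [
    ("Revenue tripled to 9 million.", "Revenue rose from 3 to 9 million.", "supported"),
    # A label-flipped variant of the same demonstration:
    ("Revenue tripled to 4 million.", "Revenue rose from 3 to 9 million.", "refuted"),
]
print(build_prompt(demos, "Exports grew 15% last year.", "Exports grew 15% in 2024."))
```

Showing the model both the original and the label-flipped version of a demonstration exposes exactly the kind of numerical mismatch the probes exploit, which is one plausible reading of why enriched contexts restore accuracy.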