Pay Attention to Real World Perturbations! Natural Robustness Evaluation in Machine Reading Comprehension

📅 2025-02-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the insufficient robustness evaluation of machine reading comprehension (MRC) models under realistic text perturbations. It introduces a natural perturbation framework grounded in Wikipedia edit histories, enabling the construction of the first real-world-driven MRC robustness benchmark. Methodologically, the authors mine edit histories, automatically align perturbed passages, and conduct large-scale benchmarking across multiple models, including Flan-T5 and large language models, on standard datasets such as SQuAD. The key contributions are threefold: (1) pretrained encoders and mainstream LLMs suffer up to 30% F1 degradation under natural perturbations, with this vulnerability generalizing across datasets; (2) synthetic perturbations fail to replicate authentic degradation patterns; (3) fine-tuning on naturally perturbed data improves robustness yet does not fully recover original performance, establishing a new evaluation standard and an actionable direction for advancing MRC robustness research.
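The degradation numbers above are measured with the standard SQuAD-style token-level F1. As a minimal sketch, the snippet below implements a simplified version of that metric (lowercasing and punctuation stripping only, omitting the article removal of the official script) plus a hypothetical `f1_degradation` helper for the relative drop when the passage is swapped for its perturbed counterpart; neither function name comes from the paper.

```python
import collections
import string

def squad_f1(prediction: str, gold: str) -> float:
    """Simplified SQuAD token-level F1 (lowercased, punctuation stripped)."""
    def tokens(text):
        text = text.lower().translate(str.maketrans("", "", string.punctuation))
        return text.split()
    pred, ref = tokens(prediction), tokens(gold)
    # Count tokens shared between prediction and reference (multiset overlap).
    overlap = sum((collections.Counter(pred) & collections.Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def f1_degradation(f1_original: float, f1_perturbed: float) -> float:
    """Relative F1 drop after replacing the passage with its perturbed version
    (hypothetical helper, not from the paper)."""
    return (f1_original - f1_perturbed) / f1_original

# A prediction that partially overlaps the gold answer:
print(round(squad_f1("the Eiffel Tower", "Eiffel Tower"), 2))  # 0.8
```

A model scoring 0.90 F1 on original passages and 0.63 on their naturally perturbed versions would show `f1_degradation(0.90, 0.63) == 0.3`, i.e. the 30% drop reported in the summary.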

📝 Abstract
As neural language models achieve human-comparable performance on Machine Reading Comprehension (MRC) and see widespread adoption, ensuring their robustness in real-world scenarios has become increasingly important. Current robustness evaluation research, though, primarily develops synthetic perturbation methods, leaving it unclear how well they reflect real-life scenarios. With this in mind, we present a framework to automatically examine MRC models on naturally occurring textual perturbations, by replacing paragraphs in MRC benchmarks with their counterparts drawn from the available Wikipedia edit history. This perturbation type is natural because it does not stem from an artificial generative process, making it inherently distinct from previously investigated synthetic approaches. In a large-scale study encompassing SQuAD datasets and various model architectures, we observe that natural perturbations cause performance degradation in pre-trained encoder language models. More worryingly, state-of-the-art Flan-T5 and Large Language Models (LLMs) inherit these errors. Further experiments demonstrate that our findings generalise to natural perturbations found in other, more challenging MRC benchmarks. In an effort to mitigate these errors, we show that robustness to natural perturbations can be improved by training on naturally or synthetically perturbed examples, though a noticeable gap still remains compared to performance on unperturbed data.
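The abstract describes replacing benchmark paragraphs with their edit-history counterparts, which requires pairing each original sentence with its rewritten version across two revisions. The paper's exact alignment procedure is not detailed here, so the following is only a sketch of one plausible approach using Python's `difflib`, keeping one-to-one sentence rewrites as candidate natural perturbations; the function name and example revisions are hypothetical.

```python
import difflib

def align_edited_sentences(old_sents, new_sents):
    """Pair sentences rewritten between two revisions of a passage,
    skipping sentences that are identical in both versions.
    (Illustrative approach, not the paper's exact alignment method.)"""
    matcher = difflib.SequenceMatcher(a=old_sents, b=new_sents, autojunk=False)
    pairs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # Keep only one-to-one rewrites; insertions/deletions change content
        # rather than perturb it, so they are ignored in this sketch.
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            pairs.extend(zip(old_sents[i1:i2], new_sents[j1:j2]))
    return pairs

# Two hypothetical revisions of the same passage:
old = ["Paris is the capital of France.",
       "It lies on the Seine.",
       "It hosts the Louvre."]
new = ["Paris is the capital of France.",
       "It is situated on the Seine river.",
       "It hosts the Louvre."]
print(align_edited_sentences(old, new))
```

Only the rewritten middle sentence is returned as an (original, perturbed) pair; the unchanged sentences are filtered out, mirroring the idea that natural perturbations are the human edits themselves rather than generated noise.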
Problem

Research questions and friction points this paper is trying to address.

Evaluating natural robustness in MRC
Impact of real-world textual perturbations
Improving model robustness with training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluation on natural textual perturbations
Use of Wikipedia edit history
Robustness gains from training on perturbed examples