🤖 AI Summary
This work systematically investigates how Retrieval-Augmented Generation (RAG) systems degrade under linguistic variations in real-world user queries—specifically formality, readability, politeness, and grammatical correctness. Using four QA datasets, we jointly evaluate two retrieval models and nine large language models (LLMs) spanning 3B–72B parameters, quantifying the impact of each variation dimension on Recall@5 and answer exact-match accuracy. Contrary to common assumptions, we empirically demonstrate that RAG systems are *more* vulnerable to linguistic variation than standalone LLMs, owing to error propagation across the retrieval and generation components. We introduce the first comprehensive four-dimensional linguistic-variation benchmark for RAG evaluation. Experiments reveal that informal queries reduce Recall@5 by up to 40.41%, while grammatical errors degrade answer match rates by up to 38.86%. These findings expose critical failure modes in practical RAG deployments and provide empirical foundations and methodological guidance for robustness-aware modeling and system optimization.
📝 Abstract
Despite the impressive performance of Retrieval-Augmented Generation (RAG) systems across various NLP benchmarks, their robustness to the queries that arise in real-world user-LLM interactions remains largely underexplored. This is a critical gap for practical deployment, where user queries exhibit greater linguistic variation and can trigger cascading errors across interdependent RAG components. In this work, we systematically analyze how varying four linguistic dimensions (formality, readability, politeness, and grammatical correctness) impacts RAG performance. We evaluate two retrieval models and nine LLMs, ranging from 3 to 72 billion parameters, across four information-seeking Question Answering (QA) datasets. Our results reveal that linguistic reformulations significantly affect both the retrieval and generation stages, causing relative performance drops of up to 40.41% in Recall@5 for less formal queries and 38.86% in answer match scores for queries containing grammatical errors. Notably, RAG systems are more sensitive to such variations than LLM-only generation, highlighting their vulnerability to error propagation under linguistic shifts. These findings underscore the need for improved robustness techniques to ensure reliability across diverse user interactions.
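The two metrics above—Recall@5 for the retrieval stage and exact match for the generation stage, together with the relative drops reported (e.g. 40.41%)—can be sketched as follows. This is a minimal illustrative sketch, not the paper's evaluation code; the function names, the simple whitespace/lowercase answer normalization, and the data layout (ranked lists of document IDs, sets of gold IDs, plain-string answers) are all assumptions.

```python
# Illustrative sketch of the evaluation metrics described above.
# Assumptions (not from the paper): retrieval output is a ranked list of
# document IDs per query, gold relevance is a set of IDs per query, and
# answers are normalized by lowercasing and collapsing whitespace.

def recall_at_k(retrieved_ids, relevant_ids, k=5):
    """Fraction of queries whose top-k retrieved docs contain a relevant doc."""
    hits = sum(
        1 for ranked, gold in zip(retrieved_ids, relevant_ids)
        if any(doc in gold for doc in ranked[:k])
    )
    return hits / len(retrieved_ids)

def exact_match(predictions, references):
    """Fraction of predicted answers matching the reference after normalization."""
    norm = lambda s: " ".join(s.lower().split())
    matches = sum(1 for p, r in zip(predictions, references) if norm(p) == norm(r))
    return matches / len(predictions)

def relative_drop(baseline_score, perturbed_score):
    """Relative performance drop (%) of linguistically varied queries vs. originals."""
    return 100.0 * (baseline_score - perturbed_score) / baseline_score
```

For example, a system whose Recall@5 falls from 0.80 on original queries to roughly 0.48 on informal reformulations exhibits a relative drop of about 40%, matching the magnitude reported above.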