🤖 AI Summary
Large language models (LLMs) are highly sensitive to minor prompt perturbations, which can produce unstable and inconsistent outputs, a critical obstacle to reliable deployment. To address this, we formally define the task of *prompt sensitivity prediction* and introduce PromptSET, the first dedicated benchmark dataset for it. PromptSET is constructed from TriviaQA and HotpotQA, incorporates diverse human-crafted and algorithmically perturbed prompt variants, and measures response stability across multiple LLMs. We systematically assess mainstream approaches to the task, including LLM self-assessment, text classification, and query performance prediction, and find that all of them struggle to predict prompt sensitivity reliably. These results show that precise prompt formulation is pivotal for LLM robustness and motivate further study of how an information need should be phrased to elicit accurate responses.
📝 Abstract
Large Language Models (LLMs) are highly sensitive to variations in prompt formulation, which can significantly affect their ability to generate accurate responses. In this paper, we introduce a new task, prompt sensitivity prediction, and a dataset, PromptSET, designed to investigate the effects of slight prompt variations on LLM performance. Building on the TriviaQA and HotpotQA datasets, we generate prompt variations and evaluate their effectiveness across multiple LLMs. We benchmark the prompt sensitivity prediction task using state-of-the-art methods from related tasks, including LLM-based self-evaluation, text classification, and query performance prediction. Our findings reveal that existing methods struggle to address prompt sensitivity prediction effectively, underscoring the need to understand how an information need should be phrased to elicit accurate LLM responses.