🤖 AI Summary
This study addresses the lack of robustness in enterprise-grade large language models (LLMs) against minor input perturbations—a critical issue often overlooked by existing evaluations, which are largely confined to academic settings. The authors present the first multidimensional perturbation benchmark tailored for enterprise applications, encompassing 11 perturbation types including textual edits, format variations (e.g., JSON/YAML), multilingual inputs, and instruction reordering. They systematically evaluate 11 prominent LLMs ranging from 4B to over 120B parameters. Results reveal that minor perturbations can degrade performance by up to 40 percentage points. Notably, model scale exhibits a nonlinear relationship with robustness: Ministral 3 8B consistently outperforms larger models, while Llama 3.1 8B performs worst, underscoring the pivotal roles of architecture and training data—and challenging the prevailing assumption that larger models are inherently more robust.
📝 Abstract
Enterprise LLM applications require consistently high quality and reliable performance across diverse scenarios, demanding robustness to minor variations. Existing research shows that even small prompt changes can lead to substantial differences in output, but it has mainly focused on a narrow set of perturbations over small academic datasets, limiting its relevance to real-world applications. To address this, we present a comprehensive benchmark suite that evaluates robustness across multiple perturbation types, including general text edits (e.g., punctuation, whitespace), formatting changes (e.g., JSON, YAML), multilingual and cross-lingual inputs, and positional variations in instructions. Evaluating 11 models ranging from 4B to 120B+ parameters, we find that minor perturbations reduce performance by up to 40 percentage points on key enterprise metrics. Critically, we demonstrate that the relationship between model size and robustness is more nuanced than conventional assumptions suggest: an 8B parameter model (Ministral 3 8B) outperforms most larger models, while another 8B model (Llama 3.1 8B) performs worst overall.
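To make the perturbation families concrete, here is a minimal sketch of how prompt perturbations of the kinds the abstract lists (punctuation edits, whitespace edits, instruction reordering) might be generated. The function names and details are illustrative assumptions, not the paper's actual benchmark harness:

```python
import random
import string

def perturb_punctuation(text: str) -> str:
    """Textual edit: strip all punctuation characters from the prompt."""
    return text.translate(str.maketrans("", "", string.punctuation))

def perturb_whitespace(text: str, seed: int = 0) -> str:
    """Textual edit: randomly double some inter-word spaces."""
    rng = random.Random(seed)
    words = text.split(" ")
    return "".join(
        w + ("  " if rng.random() < 0.5 else " ") for w in words
    ).rstrip()

def reorder_instructions(instructions: list[str], seed: int = 0) -> list[str]:
    """Positional variation: shuffle instruction order, content unchanged."""
    rng = random.Random(seed)
    shuffled = instructions[:]
    rng.shuffle(shuffled)
    return shuffled

prompt = "Summarize the report, then list action items."
print(perturb_punctuation(prompt))
# → Summarize the report then list action items
```

A robustness evaluation of this style would run the same task through each perturbed variant and compare task metrics against the unperturbed baseline; the paper reports drops of up to 40 percentage points under such semantically neutral changes.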