🤖 AI Summary
This work addresses the limitations of existing ethical evaluations for large language models, which predominantly rely on single-turn interactions and average performance metrics and thereby fail to capture rare yet high-risk ethical failures that emerge in multi-turn adversarial settings. To overcome this, the authors propose the Adversarial Moral Stress Testing (AMST) framework, which leverages structured prompt transformations and simulated multi-turn adversarial scenarios. AMST introduces, for the first time, distribution-aware robustness metrics that explicitly account for behavioral variance, tail risk, and temporal drift, uncovering degradation patterns overlooked by conventional approaches. Experiments on models including LLaMA-3-8B, GPT-4o, and DeepSeek-v3 demonstrate that AMST effectively discriminates among models in terms of ethical robustness and reveals vulnerabilities that single-turn assessments cannot detect.
📝 Abstract
Evaluating the ethical robustness of large language models (LLMs) deployed in software systems remains challenging, particularly under sustained adversarial user interaction. Existing safety benchmarks typically rely on single-round evaluations and aggregate metrics, such as toxicity scores and refusal rates, which offer limited visibility into behavioral instability that may arise during realistic multi-turn interactions. As a result, rare but high-impact ethical failures and progressive degradation effects may remain undetected prior to deployment. This paper introduces Adversarial Moral Stress Testing (AMST), a stress-based evaluation framework for assessing ethical robustness under adversarial multi-round interactions. AMST applies structured stress transformations to prompts and evaluates model behavior through distribution-aware robustness metrics that capture variance, tail risk, and temporal behavioral drift across interaction rounds. We evaluate AMST on several state-of-the-art LLMs, including LLaMA-3-8B, GPT-4o, and DeepSeek-v3, using a large set of adversarial scenarios generated under controlled stress conditions. The results demonstrate substantial differences in robustness profiles across models and expose degradation patterns that are not observable under conventional single-round evaluation protocols. In particular, the results show that robustness depends on distributional stability and tail behavior rather than on average performance alone. Finally, AMST provides a scalable and model-agnostic stress-testing methodology that enables robustness-aware evaluation and monitoring of LLM-enabled software systems operating in adversarial environments.
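To make the distribution-aware metrics concrete, the sketch below shows one plausible way to compute variance, tail risk, and temporal drift from a matrix of per-round safety scores. The abstract does not specify the exact estimators, so the choices here are assumptions: tail risk is modeled as conditional value-at-risk (the mean of the worst α-fraction of scores), and drift as the least-squares slope of the per-round mean score; the function name and score convention (higher = safer) are illustrative.

```python
import numpy as np

def robustness_profile(scores: np.ndarray, alpha: float = 0.05) -> dict:
    """Summarize per-round safety scores into distribution-aware metrics.

    scores: array of shape (n_scenarios, n_rounds), where scores[i, t] is
    the safety score of scenario i at interaction round t (higher = safer).
    alpha: tail fraction used for the CVaR-style tail-risk estimate.
    """
    pooled = scores.ravel()
    mean = pooled.mean()
    # Behavioral variance over the pooled score distribution.
    variance = pooled.var(ddof=1)
    # Tail risk: average of the worst alpha-fraction of scores (CVaR).
    cutoff = np.quantile(pooled, alpha)
    tail = pooled[pooled <= cutoff]
    cvar = tail.mean() if tail.size else cutoff
    # Temporal drift: slope of the per-round mean score across rounds;
    # a negative slope indicates progressive degradation.
    rounds = np.arange(scores.shape[1])
    per_round_mean = scores.mean(axis=0)
    drift = np.polyfit(rounds, per_round_mean, 1)[0]
    return {"mean": mean, "variance": variance, "cvar": cvar, "drift": drift}
```

Under this formulation, two models with identical mean scores can still differ sharply in `cvar` (rare severe failures) or `drift` (degradation over rounds), which is exactly the kind of distinction the abstract argues single-round averages miss.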