🤖 AI Summary
Problem: Large language models (LLMs) lack robustness and behave unpredictably under linguistic perturbations in in-context learning (ICL).
Method: This paper proposes MMT4NL, the first metamorphic testing framework tailored for natural language, grounded in the principle "language as code, prompts as programs." It integrates mutation testing and adversarial testing by generating semantics-preserving yet syntactically varied prompt variants, enabling quantitative assessment and precise localization of prompt-level defects.
Contribution/Results: MMT4NL is the first systematic adaptation of software testing paradigms to ICL trustworthiness evaluation. On diverse tasks, including sentiment analysis and question answering, it detects latent linguistic defects in state-of-the-art models with high sensitivity and provides interpretable robustness diagnostics. Extensive experiments confirm its effectiveness and generalizability across models and tasks.
📝 Abstract
In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), enabling them to perform new tasks from a few provided examples without explicit fine-tuning. Despite their impressive adaptability, these models remain vulnerable to subtle adversarial perturbations and exhibit unpredictable behavior when faced with linguistic variations. Inspired by software testing principles, we introduce MMT4NL, a framework for evaluating the trustworthiness of in-context learning that combines adversarial perturbations with software testing techniques. It covers diverse linguistic capabilities for evaluating the ICL behavior of LLMs. MMT4NL is built around the idea of crafting metamorphic adversarial examples from a test set in order to quantify and pinpoint bugs in the designed prompts of ICL. Our philosophy is to treat any LLM as software and to validate its functionality just as one would test software. Finally, we demonstrate applications of MMT4NL on sentiment analysis and question-answering tasks. Our experiments reveal various linguistic bugs in state-of-the-art LLMs.
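The core metamorphic idea, that a semantics-preserving rewrite of a prompt should not change the model's answer, can be sketched in a few lines. The perturbation rule (contraction expansion) and the `classify` stub below are illustrative assumptions standing in for an actual LLM call; they are not MMT4NL's real implementation.

```python
# Minimal sketch of a metamorphic prompt test in the spirit of MMT4NL.
# Assumptions: contraction expansion as the semantics-preserving
# perturbation, and a trivial keyword classifier as a stand-in LLM.

def expand_contractions(text: str) -> str:
    """Semantics-preserving perturbation: expand common contractions."""
    rules = {"don't": "do not", "isn't": "is not", "can't": "cannot"}
    for short, full in rules.items():
        text = text.replace(short, full)
    return text

def classify(prompt: str) -> str:
    """Hypothetical stand-in for an LLM sentiment classifier."""
    negations = {"not", "don't", "isn't", "can't", "never"}
    return "negative" if any(w in negations for w in prompt.lower().split()) else "positive"

def metamorphic_test(prompt: str) -> dict:
    """Check that the label is invariant under a semantics-preserving rewrite."""
    variant = expand_contractions(prompt)
    return {
        "variant": variant,
        "invariant_holds": classify(prompt) == classify(variant),
    }

result = metamorphic_test("I don't like this movie at all.")
print(result["invariant_holds"])  # a disagreement here would flag a linguistic bug
```

A real harness would replace `classify` with an API call to the model under test and aggregate violation rates per perturbation type, which is the kind of quantitative, per-capability diagnostic the framework aims at.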