🤖 AI Summary
To address the low efficiency, insufficient coverage, and high execution cost of metamorphic relation (MR) selection in robustness testing of large language models (LLMs), this paper proposes the first multi-objective search-based optimization framework tailored to MR grouping. Methodologically, it designs a novel encoding scheme and applies four evolutionary algorithms (Single-GA, NSGA-II, SPEA2, and MOEA/D) to jointly optimize fault detection rate and execution overhead, while explicitly modeling combinatorial perturbations to expand the test space. The contributions are: (i) the first application of multi-objective evolutionary algorithms to MR selection; (ii) the discovery of "silver bullet" MRs that generalize across Text-to-Text tasks and are highly effective at confusing LLMs; and (iii) empirical evidence that MOEA/D significantly outperforms random search, achieving substantial improvements in fault detection rate and demonstrating both effectiveness and generalizability.
📝 Abstract
Assessing the trustworthiness of Large Language Models (LLMs), including their robustness, has garnered significant attention. Recently, metamorphic testing, which defines Metamorphic Relations (MRs), has been widely applied to evaluate LLM robustness. However, MR-based robustness testing requires a large number of MRs, which makes MR selection an optimization problem. Most existing LLM testing studies are limited to automatically generating test cases (i.e., MRs) to improve failure detection, and most evaluate LLMs over a limited test space of single-perturbation MRs. In contrast, this paper proposes a search-based approach that optimizes MR groups to maximize failure detection while minimizing LLM execution cost. Moreover, our approach covers combinatorial perturbations in MRs, expanding the test space of the robustness assessment. We developed a search process and implemented four search algorithms, Single-GA, NSGA-II, SPEA2, and MOEA/D, with a novel encoding to solve the MR selection problem in LLM robustness testing. We conducted comparative experiments on the four search algorithms and a random-search baseline, using two major LLMs on primary Text-to-Text tasks. Our statistical and empirical investigation yielded two key findings: (1) MOEA/D performed best at optimizing the MR space for LLM robustness testing, and (2) we identified silver-bullet MRs that showed a dominant ability to confuse LLMs across different Text-to-Text tasks. Our research sheds light on this fundamental problem in optimized LLM robustness assessment and provides insights into search-based solutions.
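To make the MR selection problem concrete, the sketch below shows one plausible shape for the group encoding and the two competing objectives. It is an illustrative toy, not the paper's implementation: the MR names, their (fault-detection, cost) scores, and the independence assumption used to combine detection rates are all invented here, and a simple Pareto filter over random candidates stands in for the evolutionary algorithms.

```python
import random

# Hypothetical single-perturbation MRs with synthetic (fault-detection, cost)
# scores; in the paper's setting these objectives come from executing LLMs.
MRS = {
    "synonym_swap":   (0.42, 1.0),
    "char_typo":      (0.35, 0.8),
    "back_translate": (0.55, 3.0),
    "case_shuffle":   (0.18, 0.5),
    "word_insert":    (0.30, 0.9),
}
NAMES = list(MRS)

def evaluate(bits):
    """Map a binary MR-group encoding to (detection, cost) objectives.

    Detection is combined under an independence assumption (a deliberate
    simplification); cost is additive over the selected MRs.
    """
    miss, cost = 1.0, 0.0
    for bit, name in zip(bits, NAMES):
        if bit:
            d, c = MRS[name]
            miss *= (1.0 - d)
            cost += c
    return 1.0 - miss, cost

def dominates(a, b):
    """True if a Pareto-dominates b (maximize detection, minimize cost)."""
    (da, ca), (db, cb) = a, b
    return da >= db and ca <= cb and (da > db or ca < cb)

def pareto_front(pop):
    """Keep only the non-dominated MR groups from a population of encodings."""
    scored = [(bits, evaluate(bits)) for bits in pop]
    return [
        (bits, obj) for bits, obj in scored
        if not any(dominates(other, obj) for _, other in scored if other != obj)
    ]

random.seed(0)
population = [tuple(random.randint(0, 1) for _ in NAMES) for _ in range(40)]
front = pareto_front(population)
```

An evolutionary algorithm such as NSGA-II or MOEA/D would iterate this evaluation inside a crossover/mutation loop over the bitstrings instead of scoring one random population, but the encoding and the dominance relation it optimizes are the same.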