🤖 AI Summary
This work addresses the challenge of systematically testing large language model (LLM) applications, which are prone to generating inaccurate, hallucinated, or harmful responses, yet operate in high-dimensional input spaces that hinder comprehensive evaluation. The study introduces evolutionary search into LLM testing for the first time, formulating test input generation as an optimization problem. By discretizing the input space into three interpretable feature categories—style, content, and perturbation—the method employs evolutionary algorithms to dynamically explore the input combinations most likely to trigger failures. This approach overcomes limitations of conventional prompt-tuning and coverage-based heuristics, substantially improving testing efficiency. Experiments on three dialogue-based question-answering systems demonstrate that the proposed method discovers, on average, 2.5× more failures than baseline techniques, with a maximum improvement of 4.3×.
📝 Abstract
Large Language Model (LLM)-based applications are increasingly deployed across various domains, including customer service, education, and mobility. However, these systems are prone to inaccurate, fictitious, or harmful responses, and their vast, high-dimensional input space makes systematic testing particularly challenging. To address this, we present STELLAR, an automated search-based testing framework for LLM-based applications that systematically uncovers text inputs leading to inappropriate system responses. Our framework models test generation as an optimization problem and discretizes the input space into stylistic, content-related, and perturbation features. Unlike prior work that focuses on prompt optimization or coverage heuristics, STELLAR employs evolutionary optimization to dynamically explore feature combinations that are more likely to expose failures. We evaluate STELLAR on three LLM-based conversational question-answering systems. The first focuses on safety, benchmarking both public and proprietary LLMs against malicious or unsafe prompts. The second and third target navigation, using an open-source and an industrial retrieval-augmented system for in-vehicle venue recommendations. Overall, STELLAR exposes up to 4.3 times (average 2.5 times) more failures than existing baseline approaches.
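To make the core idea concrete, the sketch below shows how test generation over a discretized feature space can be framed as an evolutionary search. This is an illustrative toy, not STELLAR's actual implementation: the feature values, the selection scheme (elitist truncation), and the `demo_fitness` stand-in (which simply rewards perturbed safety-related prompts) are all assumptions for demonstration, since the paper's concrete features and failure oracle are not given here.

```python
import random

# Hypothetical discretized feature categories (illustrative values only,
# not taken from the paper).
STYLES = ["formal", "slang", "terse", "verbose"]
CONTENTS = ["safety", "navigation", "small_talk", "out_of_scope"]
PERTURBATIONS = ["none", "typos", "char_swap", "paraphrase"]
FEATURES = [STYLES, CONTENTS, PERTURBATIONS]


def random_individual(rng):
    # An individual is one (style, content, perturbation) combination.
    return tuple(rng.choice(category) for category in FEATURES)


def mutate(individual, rng):
    # Resample one randomly chosen feature dimension.
    i = rng.randrange(len(FEATURES))
    mutated = list(individual)
    mutated[i] = rng.choice(FEATURES[i])
    return tuple(mutated)


def crossover(a, b, rng):
    # Uniform crossover: each feature comes from either parent.
    return tuple(rng.choice(pair) for pair in zip(a, b))


def evolve(fitness, generations=20, pop_size=12, seed=0):
    """Maximize `fitness`, e.g. failures triggered per feature combination."""
    rng = random.Random(seed)
    population = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]  # elitist truncation selection
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)


# Stand-in oracle: pretend perturbed, slangy safety prompts expose the
# most failures. In a real setting this would query the system under test.
def demo_fitness(individual):
    style, content, perturbation = individual
    return (content == "safety") + (perturbation != "none") + 0.5 * (style == "slang")


best = evolve(demo_fitness)
```

In a real deployment, `demo_fitness` would be replaced by running generated prompts with the chosen feature combination against the LLM application and counting inappropriate responses, so the search concentrates test effort on the failure-prone regions of the input space.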