BASFuzz: Towards Robustness Evaluation of LLM-based NLP Software via Automated Fuzz Testing

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address two key challenges in robustness evaluation of LLM-based NLP software, namely insufficient coupling between testing methods and model behavior, and diminished fuzzing efficacy in NLG scenarios, this paper proposes BASFuzz, the first framework to treat prompt-example pairs as unified fuzzing targets. BASFuzz introduces a text-consistency metric to guide input mutation and integrates beam search with simulated annealing into a Beam-Annealing Search algorithm. It further employs information entropy to adaptively modulate mutation intensity and an elitism strategy to enhance search efficiency. Evaluated across six representative generation and understanding tasks, BASFuzz achieves 90.34% test effectiveness and reduces average time overhead by 2,163.85 seconds compared with the best baseline, while significantly improving both defect detection rate and test coverage.

📝 Abstract
Fuzzing has shown great success in evaluating the robustness of intelligent natural language processing (NLP) software. As large language model (LLM)-based NLP software is widely deployed in critical industries, existing methods still face two main challenges: (1) testing methods are insufficiently coupled with the behavioral patterns of LLM-based NLP software; (2) fuzzing capability generally degrades in natural language generation (NLG) testing scenarios. To address these issues, we propose BASFuzz, an efficient fuzz testing method tailored for LLM-based NLP software. BASFuzz targets complete test inputs composed of prompts and examples, and uses a text consistency metric to guide mutations of the fuzzing loop, aligning with the behavioral patterns of LLM-based NLP software. A Beam-Annealing Search algorithm, which integrates beam search and simulated annealing, is employed to design an efficient fuzzing loop. In addition, information entropy-based adaptive adjustment and an elitism strategy further enhance fuzzing capability. We evaluate BASFuzz on six datasets in representative scenarios of NLG and natural language understanding (NLU). Experimental results demonstrate that BASFuzz achieves a testing effectiveness of 90.335% while reducing the average time overhead by 2,163.852 seconds compared to the current best baseline, enabling more effective robustness evaluation prior to software deployment.
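The abstract mentions information entropy-based adaptive adjustment of mutation intensity. The paper does not give its formula here, so the following is only a minimal illustrative sketch, assuming that higher entropy in the model's output distribution (greater uncertainty) triggers stronger mutations; the function names, edit-count range, and scaling are hypothetical, not the authors' implementation.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutation_intensity(probs, min_edits=1, max_edits=5):
    """Map output-distribution entropy to a number of word-level edits.

    Assumption: higher entropy (a more uncertain model output) warrants a
    stronger mutation, so intensity scales linearly with normalized entropy.
    """
    h = entropy(probs)
    h_max = math.log2(len(probs))  # entropy of the uniform distribution
    scale = h / h_max if h_max > 0 else 0.0
    return min_edits + round(scale * (max_edits - min_edits))
```

A confident output such as `[1.0]` maps to the minimum intensity, while a uniform distribution maps to the maximum.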
Problem

Research questions and friction points this paper is trying to address.

Evaluating robustness of LLM-based NLP software via automated fuzz testing
Addressing insufficient coupling with LLM behavioral patterns in testing
Improving fuzzing capability degradation in natural language generation scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Beam-Annealing Search algorithm for fuzzing loop
Text consistency metric to guide input mutations
Information entropy-based adaptive adjustment strategy
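The core ideas above can be sketched in code. This is not the paper's algorithm, only a minimal illustration of how beam search, a simulated-annealing acceptance rule, and elitism might combine in a fuzzing loop; `mutate`, `score`, and all parameters are hypothetical, and `score` stands in for the text consistency metric (lower meaning a larger consistency drop, i.e. a better test case).

```python
import math
import random

def beam_annealing_search(seed, mutate, score, beam_width=4,
                          iterations=50, temp=1.0, cooling=0.95):
    """Illustrative beam search with simulated-annealing acceptance.

    `mutate(x)` yields neighbour inputs; `score(x)` is lower for inputs that
    better expose inconsistent behavior. Elitism: the best candidate ever
    seen is always retained, so the search never loses its best test case.
    """
    beam = [seed]
    elite = seed
    for _ in range(iterations):
        candidates = []
        for cand in beam:
            for neighbour in mutate(cand):
                delta = score(neighbour) - score(cand)
                # Accept improvements outright; accept regressions with
                # probability exp(-delta / temp) to escape local optima.
                if delta <= 0 or random.random() < math.exp(-delta / temp):
                    candidates.append(neighbour)
        if candidates:
            beam = sorted(candidates, key=score)[:beam_width]
            if score(beam[0]) < score(elite):
                elite = beam[0]
        temp *= cooling  # cooling schedule: acceptance becomes greedier
    return elite
```

On a toy objective (e.g. integers mutated by ±1, scored by distance to a target), the loop converges to the optimum while occasionally keeping worse neighbours early on, which is the annealing behavior the method relies on.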
Mingxuan Xiao
Hohai University, China
Yan Xiao
Sun Yat-sen University, China
Shunhui Ji
Hohai University, China
Jiahe Tu
Hohai University, China
Pengcheng Zhang
Beihang University, China