Semantic-Preserving Transformations as Mutation Operators: A Study on Their Effectiveness in Defect Detection

📅 2025-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the feasibility of semantic-preserving code transformations (SPTs) as mutation operators for testing the robustness of defect detection models. Inspired by metamorphic testing, the authors collect and manually validate SPT operators and evaluate their impact on fine-tuned VulBERTa and PLBART models using the Devign dataset. Manual validation reveals that, of 39 transformations implemented from a pool of 94 publicly available ones, only 16 are genuinely semantics-preserving, exposing significant risks in reusing shared transformations. Experiments show that applying SPTs through three ensemble strategies fails to improve model accuracy, and the studied models exhibit low sensitivity to existing SPTs. The contributions are threefold: (1) the first systematic empirical study of SPTs for testing-phase defect detection; (2) a reusable, human-validated SPT operator suite; and (3) the finding that current SPTs offer limited utility for improving LLM-based defect detection, a cautionary insight for future robustness evaluation.
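The metamorphic-testing idea underlying the study can be stated as an invariance check: a defect detector's prediction should not change when a genuinely semantics-preserving transformation is applied to the input. A minimal sketch of that check, with an invented helper (`metamorphic_check`) and a toy detector standing in for the paper's fine-tuned models:

```python
# Hypothetical illustration of the metamorphic relation assumed by the study:
# predictions must be invariant under semantic-preserving transformations.
# All names here (metamorphic_check, toy_model) are invented for illustration.

def metamorphic_check(model, code, spts):
    """Return the names of transformations whose variant flips the prediction."""
    base = model(code)
    return [name for name, t in spts if model(t(code)) != base]

# Toy detector: flags snippets mentioning strcpy as defective (label 1).
toy_model = lambda snippet: int("strcpy" in snippet)

spts = [
    ("identity", lambda s: s),                  # trivially semantics-preserving
    ("add_comment", lambda s: s + " // note"),  # semantics-preserving for C
]
print(metamorphic_check(toy_model, "strcpy(dst, src);", spts))  # → []
```

An empty result means the detector is robust to these particular SPTs; any returned name marks a transformation the model is sensitive to.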

📝 Abstract
Recent advances in defect detection use language models. Existing works enhanced the training data to improve the models' robustness when applied to semantically identical code (i.e., predictions should be the same). However, the use of semantically identical code has not been considered for improving the tools during their application - a concept closely related to metamorphic testing. The goal of our study is to determine whether we can use semantic-preserving transformations, analogous to mutation operators, to improve the performance of defect detection tools in the testing stage. We first collect existing publications which implemented semantic-preserving transformations and share their implementation, such that we can reuse them. We empirically study the effectiveness of three different ensemble strategies for enhancing defect detection tools. We apply the collected transformations on the Devign dataset, considering vulnerabilities as a type of defect, and two fine-tuned large language models for defect detection (VulBERTa, PLBART). We found 28 publications with 94 different transformations. We chose to implement 39 transformations from four of the publications, but a manual check revealed that 23 out of 39 transformations change code semantics. Using the 16 remaining, correct transformations and three ensemble strategies, we were not able to increase the accuracy of the defect detection models. Our results show that reusing shared semantic-preserving transformations is difficult, sometimes even causing wrongful changes to the semantics.
Keywords: defect detection, language model, semantic-preserving transformation, ensemble
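To make the notion of a semantic-preserving transformation concrete, consider identifier renaming, one of the simplest operators that genuinely preserves semantics. The sketch below uses Python's `ast` module purely for illustration; the paper's transformations operate on C code from Devign, and the helper names (`RenameVar`, `apply_spt`) are invented here:

```python
import ast

class RenameVar(ast.NodeTransformer):
    """Rename every occurrence of one identifier. Identifier renaming is
    one of the simplest genuinely semantics-preserving transformations."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

def apply_spt(source, old="x", new="renamed_x"):
    tree = ast.parse(source)
    tree = RenameVar(old, new).visit(tree)
    return ast.unparse(tree)  # requires Python 3.9+

original = "x = 6\nx = x * 7"
variant = apply_spt(original)
print(variant)  # renamed_x = 6 / renamed_x = renamed_x * 7
```

Both snippets compute the same value under different names, which is exactly the property the study found many shared transformations fail to guarantee.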
Problem

Research questions and friction points this paper is trying to address.

Evaluating semantic-preserving transformations for defect detection enhancement
Assessing reuse feasibility of shared code transformations
Testing ensemble strategies on defect detection model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-preserving transformations as mutation operators
Ensemble strategies for defect detection
Reuse of shared transformations from prior publications
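One plausible instance of an ensemble strategy over SPT variants is majority voting: predict on the original snippet and on each transformed variant, then take the most common label. This is a hedged sketch of the general idea, not the paper's exact three strategies, and the names (`majority_vote`, `toy_model`) are invented:

```python
from collections import Counter

def majority_vote(model, code, spts):
    """Sketch of one possible ensemble strategy: run the detector on the
    original snippet plus each SPT variant and return the majority label."""
    variants = [code] + [t(code) for t in spts]
    preds = [model(v) for v in variants]
    return Counter(preds).most_common(1)[0][0]

# Toy detector: flags snippets mentioning strcpy as defective (label 1).
toy_model = lambda snippet: int("strcpy" in snippet)

# Two trivially semantics-preserving variants for illustration.
spts = [lambda s: "/* hdr */ " + s, lambda s: s + "\n"]
print(majority_vote(toy_model, "strcpy(dst, src);", spts))  # → 1
```

If a genuinely semantics-preserving variant flips the model's label, the vote can smooth out that instability; the study found this did not improve accuracy in practice.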