Auto-SPT: Automating Semantic Preserving Transformations for Code

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing code clone detection models are trained on clean data but exhibit poor robustness against semantic-preserving transformations (e.g., refactoring, minification, formatting, compiler optimizations) prevalent in real-world code, leading to a train-deploy distribution shift. To address this, we propose Auto-SPT—a novel framework that leverages large language models (LLMs) to automatically discover, compose, and formally analyze diverse semantic-preserving transformations for synthesizing highly robust training data. Its core contributions are: (1) LLM-driven automatic construction and strength-aware modeling of transformations; (2) a scalable composition mechanism enabling combinatorial transformation generation; and (3) a task-specific adversarial data augmentation paradigm tailored for clone detection. Experiments show that Auto-SPT–generated transformations substantially degrade state-of-the-art models’ accuracy (average drop of 32.7%), while using them for augmentation improves model F1 scores by up to 18.4% across diverse perturbations, effectively bridging the robustness gap in practical deployment scenarios.

📝 Abstract
Machine learning (ML) models for code clone detection determine whether two pieces of code are semantically equivalent, which in turn is a key building block for software-engineering tasks like refactoring and for security tasks like vulnerability and malware detection. While these models are predominantly trained on clean, structured code datasets, real-world code often undergoes a variety of semantic-preserving transformations, including refactoring, minification, automated formatting, and compiler optimizations. To address this critical gap between training and test data, we propose Auto-SPT, a novel framework that automatically constructs synthetic-data generators for code. Auto-SPT is designed to produce Semantic Preserving Transformations (SPTs) that alter a program's syntactic structure while preserving its functionality, and it is instantiated on top of Large Language Models (LLMs). In particular, we use LLMs to craft a diverse set of SPTs, generate strong implementations of these SPTs, and compose them into stronger transformations. Our formal analysis shows that the diversity of SPTs impacts the strength of their composition. We then empirically demonstrate that Auto-SPT generates more diverse SPTs than existing approaches and that these SPTs significantly degrade the performance of state-of-the-art code clone detectors. Further experiments show Auto-SPT can be used to enhance code training datasets, producing code-clone detection models that are robust to real-world, adversarial code transformations.
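The paper's SPTs and their implementations are LLM-generated; as a minimal illustrative sketch of what an SPT and a composition of SPTs look like (all function names here are our own, not the paper's), consider a variable-renaming pass and a formatting-normalization pass over Python source, composed into one stronger transformation:

```python
import ast

def rename_locals(source: str) -> str:
    """SPT: rename locally assigned variables to neutral names (v0, v1, ...).
    Purely syntactic; behavior is preserved. (For illustration only: a real
    SPT would also guard against collisions with existing names.)"""
    tree = ast.parse(source)
    # Pass 1: collect names that are assigned (Store context) -> safe to rename.
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    mapping = {name: f"v{i}" for i, name in enumerate(sorted(assigned))}
    # Pass 2: rewrite every occurrence of those names in place.
    for n in ast.walk(tree):
        if isinstance(n, ast.Name) and n.id in mapping:
            n.id = mapping[n.id]
    return ast.unparse(tree)

def normalize_format(source: str) -> str:
    """SPT: canonical reformatting via an AST round-trip (drops comments,
    normalizes whitespace); the program's semantics are untouched."""
    return ast.unparse(ast.parse(source))

def compose(*spts):
    """A composition of semantic-preserving maps is itself semantic-preserving,
    which is what lets SPTs be stacked into stronger transformations."""
    def composed(source: str) -> str:
        for spt in spts:
            source = spt(source)
        return source
    return composed

src = "total = 0  # running sum\nfor item in data:\n    total += item"
print(compose(rename_locals, normalize_format)(src))
```

The transformed program computes the same result as the original under any binding of `data`, even though every identifier and the surface formatting have changed, which is exactly the property that makes such outputs hard negatives/positives for a clone detector.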
Problem

Research questions and friction points this paper is trying to address.

Addresses gap between clean training data and real-world transformed code
Automates generation of semantic-preserving code transformations using LLMs
Enhances robustness of code clone detection models against adversarial changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically constructs synthetic-data generators for code
Uses LLMs to craft diverse semantic-preserving transformations
Enhances training datasets for robust code clone detection
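The augmentation idea in the last bullet can be sketched as a small loop: because SPTs preserve semantics, a transformed variant of either side of a labeled pair keeps the original clone/non-clone label. The function name, pair format, and the toy no-op SPT below are our illustrative assumptions, not the paper's API:

```python
import random

def augment_clone_pairs(pairs, spts, k=1, seed=0):
    """Hypothetical augmentation sketch: for each labeled pair
    (code_a, code_b, label), emit k extra pairs per side with a randomly
    chosen SPT applied. Labels carry over unchanged because each SPT
    preserves program semantics."""
    rng = random.Random(seed)
    augmented = list(pairs)
    for code_a, code_b, label in pairs:
        for _ in range(k):
            spt = rng.choice(spts)
            augmented.append((spt(code_a), code_b, label))
            augmented.append((code_a, spt(code_b), label))
    return augmented

# Toy SPT: appending a no-op statement changes syntax, not behavior.
append_noop = lambda src: src + "\npass"

pairs = [("x = 1", "y = 1", 0)]
print(len(augment_clone_pairs(pairs, [append_noop])))  # 1 original + 2 augmented = 3
```

Training a detector on the augmented set exposes it to the same label under varied surface forms, which is the mechanism behind the robustness gains the paper reports.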