🤖 AI Summary
This work addresses the limited robustness of large language models (LLMs) in code-related tasks, where they are highly sensitive to input perturbations and prone to generating incorrect or insecure code. The study presents a systematic evaluation of LLM robustness under multi-granular perturbations, spanning character-, word-, and sentence-level modifications, and proposes improving resilience by supervised fine-tuning on perturbed datasets. Experimental results show that this approach reduces the robustness degradation rate (RD) by roughly 4%–6% across code generation and completion tasks, at the cost of a minor performance trade-off of about 1%–3% in pass@1 accuracy relative to fine-tuning on unperturbed data. These findings highlight an inherent trade-off between robustness and task-specific performance, offering guidance for more reliable deployment of code LLMs in real-world scenarios.
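The summary reports RD (robustness degradation) figures without giving the formula. A common way to define such a metric is the relative drop in pass@1 when moving from clean to perturbed inputs; the sketch below uses that hypothetical definition, which may differ from the paper's exact formulation:

```python
def robustness_degradation(pass1_clean: float, pass1_perturbed: float) -> float:
    """Hypothetical RD metric: relative drop in pass@1 under perturbed inputs.

    A value of 0.0 means the model is unaffected by perturbations;
    larger values indicate weaker robustness.
    """
    if pass1_clean <= 0.0:
        raise ValueError("pass@1 on clean inputs must be positive")
    return (pass1_clean - pass1_perturbed) / pass1_clean


# Illustrative (made-up) numbers: clean pass@1 of 50%, perturbed pass@1 of 45%
print(robustness_degradation(0.50, 0.45))  # relative degradation of ~10%
```

Under this definition, the reported 4%–6% improvement would correspond to the RD gap shrinking after fine-tuning on perturbed data.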
📝 Abstract
Context: In the fast-paced evolution of software development, Large Language Models (LLMs) have become indispensable tools for tasks such as code generation, completion, analysis, and bug fixing. Ensuring that these models remain robust when handling diverse inputs is critical, as variations in input can lead to incorrect or insecure code outputs.

Objective: This work aims to improve the robustness of LLMs for coding-related tasks against potentially adversarial inputs. Specifically, we investigate how fine-tuning LLMs on perturbed datasets affects their robustness to input perturbations.

Method: We systematically evaluated LLM robustness by fine-tuning models on datasets perturbed at the character, word, and sentence levels, comparing the results against base models and models fine-tuned on unperturbed datasets.

Results: Fine-tuning LLMs on perturbed datasets significantly improves model robustness (RD usually drops by around 4%–6%), especially for models with relatively weak robustness. However, this fine-tuning typically incurs a slight performance decrease (pass@1 usually drops by around 1%–3%) compared to fine-tuning on unperturbed datasets, although occasional performance improvements are observed.

Conclusion & Implications: Fine-tuning LLMs for coding tasks on perturbed data effectively enhances their robustness at the cost of a minor performance reduction, emphasizing the importance of balancing robustness and performance in LLMs for coding applications.
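The abstract names three perturbation granularities but does not specify the operators. The sketch below shows one plausible, minimal instance of each level applied to a code-generation prompt; the specific operators (adjacent-character swap, adjacent-word swap, distractor-sentence append) are illustrative assumptions, not the paper's actual perturbation suite:

```python
import random

random.seed(42)  # make the random perturbations reproducible


def char_perturb(text: str) -> str:
    """Character-level: swap two adjacent characters at a random position."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]


def word_perturb(text: str) -> str:
    """Word-level: swap two adjacent words."""
    words = text.split()
    if len(words) < 2:
        return text
    i = random.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)


def sentence_perturb(text: str) -> str:
    """Sentence-level: append a semantically irrelevant distractor sentence."""
    return text + " Note: this extra sentence should not change the task."


prompt = "Write a function that returns the sum of a list of integers."
print(char_perturb(prompt))
print(word_perturb(prompt))
print(sentence_perturb(prompt))
```

A perturbed fine-tuning set would then pair such modified prompts with the original (unchanged) reference solutions, teaching the model to produce correct code despite noisy inputs.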