🤖 AI Summary
Large language models (LLMs) frequently generate insecure infrastructure-as-code (IaC) configurations, and no systematic, security-aware fine-tuning methodology exists for IaC. Method: We propose GenSIaC, the first security-aware instruction-tuning dataset and framework for IaC. It comprises curated, multi-cloud (AWS/Azure/GCP), multi-language (Terraform/CloudFormation), vulnerability-pattern-annotated instruction data, coupled with cross-model and cross-language ablation and generalization evaluations. Contribution/Results: Instruction-tuned LLMs improve substantially at identifying and mitigating IaC security defects, with F1 scores rising from 0.303 to 0.858, and generalize well across models, cloud platforms, and IaC languages, validating GenSIaC's practicality and scalability. This work establishes a new paradigm for security-driven, AI-native cloud infrastructure development.
📝 Abstract
In recent years, Infrastructure as Code (IaC) has emerged as a critical approach for managing and provisioning IT infrastructure through code and automation. IaC enables organizations to create scalable and consistent environments, effectively managing servers and development settings. However, the growing complexity of cloud infrastructures has led to an increased risk of misconfigurations and security vulnerabilities in IaC scripts. To address this problem, this paper investigates the potential of Large Language Models (LLMs) for generating security-aware IaC code that avoids the misconfigurations commonly introduced by developers and administrators.
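To make the class of defect concrete, the sketch below flags one well-known IaC misconfiguration, a world-readable S3 bucket ACL in a Terraform snippet. This is purely illustrative and is not the paper's detection method; production scanners such as Checkov or tfsec cover hundreds of such rules, and the regex check here is a deliberately minimal stand-in.

```python
import re

# A Terraform resource with a classic misconfiguration: the bucket ACL
# makes its contents world-readable.
TERRAFORM_SNIPPET = '''
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs"
  acl    = "public-read"
}
'''

def has_public_acl(hcl: str) -> bool:
    """Return True if any resource sets a public-read or public-read-write ACL."""
    return re.search(r'acl\s*=\s*"public-read(-write)?"', hcl) is not None

print(has_public_acl(TERRAFORM_SNIPPET))         # True: world-readable ACL found
print(has_public_acl('acl = "private"'))         # False: private ACL is fine
```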
While LLMs have made significant progress in natural language processing and code generation, their ability to generate secure IaC scripts remains unclear. This paper addresses two major problems: 1) the lack of understanding of security weaknesses in IaC scripts generated by LLMs, and 2) the absence of techniques for enhancing the security of IaC code generated with LLMs.
To assess the extent to which LLMs contain security knowledge, we first conduct a comprehensive evaluation of base LLMs in recognizing major IaC security weaknesses during the generation and inspection of IaC code. Then, we propose GenSIaC, an instruction fine-tuning dataset designed to improve LLMs' ability to recognize potential security weaknesses. Leveraging GenSIaC, we fine-tune LLMs so that the resulting models generate security-aware IaC code. Our evaluation demonstrates that our models achieve substantially improved performance in recognizing and preventing IaC security misconfigurations, e.g., boosting the F1-score from 0.303 to 0.858. Additionally, we perform ablation studies and explore GenSIaC's generalizability to other LLMs and its cross-language capabilities.
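The summary describes GenSIaC as vulnerability-pattern-annotated instruction data. A single record in such a dataset might look like the sketch below; the field names and schema here are our assumption for illustration, not the paper's published format.

```python
import json

# Hypothetical GenSIaC-style instruction-tuning record: an instruction, an
# insecure IaC input, a secured rewrite as the target output, plus the
# vulnerability-pattern, cloud, and language annotations the summary mentions.
record = {
    "instruction": "Review the following Terraform resource and rewrite it "
                   "to remove any security misconfigurations.",
    "input": 'resource "aws_s3_bucket" "logs" {\n  acl = "public-read"\n}',
    "output": 'resource "aws_s3_bucket" "logs" {\n  acl = "private"\n}',
    "weakness": "publicly readable S3 bucket ACL",  # vulnerability-pattern tag
    "cloud": "AWS",
    "language": "Terraform",
}

print(json.dumps(record, indent=2))
```

Pairing each insecure snippet with a secured rewrite is what lets supervised fine-tuning teach a model both to recognize a weakness and to emit the mitigated configuration.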