🤖 AI Summary
This work addresses the limitations of traditional DevSecOps practices, which often neglect proactive security integration and lack forward-looking models of attacker behavior, leaving cloud environments poorly defended against sophisticated threats. To overcome these challenges, the paper introduces an automated approach that integrates large language models (LLMs) with Security Chaos Engineering (SCE). Specifically, LLMs are used to generate attack-defense trees that simulate plausible attack paths, which in turn inform the design of SCE experiments. This methodology enables teams to anticipate previously unseen threats and deploy defenses preemptively. Furthermore, it establishes a reproducible, LLM-driven security validation pipeline that strengthens the proactive defense capabilities of DevSecOps teams.
📝 Abstract
The most valuable asset of any cloud-based organization is its data, which is increasingly exposed to sophisticated cyberattacks. Until recently, many government entities and critical national services operating in the cloud treated security measures in DevOps environments as optional. This includes systems managing sensitive information, such as electoral processes or military operations, which have historically been prime targets for cybercriminals. Resistance to adopting security practices is often driven by fear of losing agility in software development, which increases the risk of accumulated vulnerabilities. Today, patching software is no longer enough; a proactive cyber defense strategy, supported by Artificial Intelligence (AI), is crucial to anticipating and mitigating threats. This work therefore proposes integrating the Security Chaos Engineering (SCE) methodology with a new LLM-based flow that automates the creation of attack-defense trees representing adversary behavior and facilitates the construction of SCE experiments from these graphical models, enabling teams to stay one step ahead of attackers and deploy previously unconsidered defenses. Detailed information about the experiment performed, along with the steps to replicate it, is available in the following repository: https://github.com/mariomc14/devsecops-adversary-llm.git.
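To make the abstract's flow concrete, the sketch below shows one plausible shape for the pipeline: an LLM returns an attack-defense tree as structured JSON, and each attack/defense pair is mechanically turned into an SCE experiment stub (inject the attack condition, then verify the paired defense holds). The JSON schema, node names, and experiment fields here are illustrative assumptions made for this sketch, not the paper's actual format; the linked repository documents the real implementation.

```python
import json

# Hypothetical example of an attack-defense tree as an LLM might return it
# for a cloud workload. The goal, steps, and defenses are invented for
# illustration only.
llm_response = """
{
  "goal": "Exfiltrate data from object storage",
  "attacks": [
    {"step": "Harvest leaked access keys",
     "defense": "Rotate keys and enforce short-lived credentials"},
    {"step": "Enumerate publicly readable buckets",
     "defense": "Block public access at the account level"}
  ]
}
"""

def tree_to_sce_experiments(tree_json: str) -> list[dict]:
    """Map each attack/defense pair in the tree to a chaos-experiment stub:
    a hypothesis, the condition to inject, and the defense to verify."""
    tree = json.loads(tree_json)
    return [
        {
            "name": f"sce-{i}",
            "hypothesis": f"System withstands: {atk['step']}",
            "inject": atk["step"],
            "verify": atk["defense"],
        }
        for i, atk in enumerate(tree["attacks"], start=1)
    ]

experiments = tree_to_sce_experiments(llm_response)
for exp in experiments:
    print(exp["name"], "->", exp["hypothesis"])
```

In a real pipeline the `inject` and `verify` fields would drive actual fault-injection tooling against a staging environment; the point of the sketch is only that the tree, once generated, gives a reproducible recipe for deriving experiments.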