Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited robustness of large language model (LLM)-generated web attack detectors in identifying cross-site scripting (XSS) and SQL injection (SQLi) variants, this paper proposes a framework integrating Retrieval-Augmented Generation (RAG) with Self-Ranking. The framework injects external security knowledge into the generation prompt and produces multiple reasoning paths, ranking the resulting candidate detectors with an F2-score–driven mechanism to select the most robust one, thereby improving generalization to unseen attack variants and reducing both false positives and false negatives. Experimental results show improvements of up to 71 and 37 percentage points in F2-score on XSS and SQLi detection, respectively, substantially outperforming existing baselines. According to the authors, this is the first systematic approach that couples RAG with Self-Ranking specifically to enhance the robustness of LLM-generated detectors.
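The Self-Ranking selection step described above can be sketched as follows. This is a minimal illustration under assumptions: the toy detector functions, the labeled payload set, and the `rank_detectors` helper are hypothetical stand-ins for the paper's LLM-generated detectors and evaluation pipeline, not code from the paper.

```python
# Sketch of Self-Ranking: score each candidate detector on a small
# labeled payload set and rank by F2-score, which weights recall
# (missed attacks) four times as heavily as precision.

def f2_score(tp, fp, fn):
    """F-beta with beta=2 computed from raw counts."""
    beta2 = 4.0  # beta squared
    denom = (1 + beta2) * tp + beta2 * fn + fp
    return (1 + beta2) * tp / denom if denom else 0.0

def rank_detectors(detectors, labeled_payloads):
    """Return (score, detector) pairs sorted best-first by F2-score."""
    ranked = []
    for det in detectors:
        tp = fp = fn = 0
        for payload, is_attack in labeled_payloads:
            pred = det(payload)
            if pred and is_attack:
                tp += 1
            elif pred and not is_attack:
                fp += 1
            elif not pred and is_attack:
                fn += 1
        ranked.append((f2_score(tp, fp, fn), det))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return ranked

# Two toy XSS detectors standing in for LLM-generated candidates.
naive = lambda s: "<script" in s.lower()
broader = lambda s: any(k in s.lower()
                        for k in ("<script", "onerror=", "javascript:"))

data = [
    ("<script>alert(1)</script>", True),
    ("<img src=x onerror=alert(1)>", True),
    ("<a href='javascript:alert(1)'>x</a>", True),
    ("hello world", False),
    ("<b>bold</b>", False),
]

best_score, best = rank_detectors([naive, broader], data)[0]
# The broader detector catches all three attack variants (F2 = 1.0),
# so Self-Ranking selects it over the naive one, which misses two.
```

The design choice of F2 over F1 mirrors the security setting: a missed attack (false negative) is costlier than a spurious alert, so the ranking rewards candidates with high recall on known attack variants.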

📝 Abstract
Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements. However, LLMs struggle to generate accurate code, resulting, e.g., in attack detectors that miss well-known attacks when used in practice. This is most likely due to the LLM lacking knowledge about some existing attacks and to the generated code not being evaluated in real usage scenarios. We propose a novel approach integrating Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline. RAG enhances the robustness of the output by incorporating external knowledge sources, while the Self-Ranking technique, inspired by the concept of Self-Consistency, generates multiple reasoning paths and creates ranks to select the most robust detector. Our extensive empirical study targets code generated by LLMs to detect two prevalent injection attacks in web security: Cross-Site Scripting (XSS) and SQL injection (SQLi). Results show a significant improvement in detection performance compared to baselines, with an increase of up to 71%pt and 37%pt in the F2-Score for XSS and SQLi detection, respectively.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness of LLM-generated security attack detectors
Improving detection of prevalent web injection attacks like XSS and SQLi
Addressing knowledge gaps in LLMs for specific security requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating RAG to incorporate external security knowledge into detector generation
Using Self-Ranking over multiple reasoning paths to select the most robust detector
Combining both techniques to improve robustness against attack variants