XGen-Q: An Explainable Domain-Adaptive LLM Framework with Retrieval-Augmented Generation for Software Security

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing malware detection models suffer from poor generalizability against code obfuscation and zero-day threats, as well as limited interpretability. To address these challenges, this paper proposes a domain-adapted large language model (LLM) framework specifically designed for malicious code understanding. Methodologically, the framework integrates retrieval-augmented generation (RAG), multi-stage prompt engineering, and diverse obfuscation-aware training strategies. Building upon the Qwen-Coder architecture, the model undergoes domain-specific pretraining and fine-tuning on over one million real-world malware samples. Experimental results demonstrate a significant reduction in perplexity, high-accuracy detection of previously unseen obfuscated samples, and fine-grained behavioral attribution. The framework generalizes well across obfuscation variants while providing human-interpretable reasoning, enabling reliable threat identification and forensic analysis.
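The summary's headline metric, perplexity, measures how well a language model fits held-out code: it is the exponential of the mean negative log-likelihood per token, so a lower value means the domain-adapted model predicts malware code more confidently. A minimal sketch of the computation (the per-token log-probabilities below are illustrative values, not results from the paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical per-token log-probabilities a model might assign to an
# assembly snippet; a better-adapted model yields lower perplexity.
logprobs = [-0.2, -1.5, -0.1, -0.8]
print(perplexity(logprobs))
```

A perplexity of 1.0 would mean the model predicts every token with certainty; comparing this value across models on the same held-out malware corpus is how the paper's "significantly lower perplexity" claim is quantified.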

📝 Abstract
Generative AI and large language models (LLMs) have shown strong capabilities in code understanding, but their use in cybersecurity, particularly for malware detection and analysis, remains limited. Existing detection systems often fail to generalize to obfuscated or previously unseen threats, underscoring the need for more adaptable and explainable models. To address this challenge, we introduce XGen-Q, a domain-adapted LLM built on the Qwen-Coder architecture and pretrained on a large-scale corpus of over one million malware samples, spanning both source and assembly code. XGen-Q uses a multi-stage prompt strategy combined with retrieval-augmented generation (RAG) to deliver reliable malware identification and detailed forensic reporting, even in the presence of complex code obfuscation. To further enhance generalization, we design a training pipeline that systematically exposes the model to diverse obfuscation patterns. Experimental results show that XGen-Q achieves significantly lower perplexity than competitive baselines and exhibits strong performance on novel malware samples, demonstrating the promise of LLM-based approaches for interpretable and robust malware analysis.
Problem

Research questions and friction points this paper is trying to address.

Detecting obfuscated or previously unseen malware threats
Improving generalization and adaptability of detection systems
Providing interpretable and robust malware analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-adapted LLM pretrained on over one million malware samples
Multi-stage prompt strategy with retrieval-augmented generation
Training pipeline exposing model to diverse obfuscation patterns
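The retrieval-augmented, multi-stage prompting idea above can be sketched as a toy pipeline: retrieve known-malicious snippets similar to the query sample, then assemble a staged prompt that asks for behavior summarization before a verdict. This is a minimal illustration under assumed details (token-set Jaccard retrieval, two prompt stages, invented snippets); the paper's actual retriever, index, and prompt templates are not specified here.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two code snippets."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus snippets most similar to the query."""
    return sorted(corpus, key=lambda s: jaccard(query, s), reverse=True)[:k]

def build_prompt(query, retrieved):
    """Multi-stage prompt: retrieved context first, then staged tasks."""
    context = "\n---\n".join(retrieved)
    return (
        "Known malicious snippets (retrieved context):\n"
        f"{context}\n\n"
        "Stage 1: Summarize the behavior of the sample below.\n"
        "Stage 2: Decide whether it is malicious and explain why.\n\n"
        f"Sample:\n{query}"
    )

# Hypothetical mini-corpus of labeled assembly fragments.
corpus = [
    "xor eax eax call decrypt_payload jmp eax",
    "mov ebx key loop xor byte ptr ebx",
    "push ebp mov ebp esp ret",
]
query = "xor eax eax jmp decrypt_payload"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

In a real deployment the retriever would be an embedding index over the million-sample corpus and the prompt would be sent to the fine-tuned Qwen-Coder model; the staged structure is what yields the human-interpretable forensic reports the abstract describes.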