SBAN: A Framework & Multi-Dimensional Dataset for Large Language Model Pre-Training and Software Code Mining

📅 2025-10-21
🤖 AI Summary
Software analysis tasks suffer from fragmented multimodal data and challenges in cross-representation learning. Method: This paper introduces SBAN, the first large-scale, four-modal aligned software dataset unifying source code, binaries, assembly instructions, and natural language descriptions. We propose a cross-modal alignment framework grounded in static analysis and multi-source parsing to ensure semantic consistency and structural extensibility. Contribution/Results: SBAN comprises over 3 million samples spanning benign and malicious software, supporting both pretraining and downstream evaluation. Experiments demonstrate significant performance gains for large language models on code translation, explanation, vulnerability detection, and malware identification. SBAN establishes a foundational resource for joint modeling of code intelligence and security analysis, enabling robust, multimodal software understanding.

📝 Abstract
This paper introduces SBAN (Source code, Binary, Assembly, and Natural Language Description), a large-scale, multi-dimensional dataset designed to advance the pre-training and evaluation of large language models (LLMs) for software code analysis. SBAN comprises more than 3 million samples, including 2.9 million benign and 672,000 malicious samples, each represented across four complementary layers: source code, binary code, assembly instructions, and natural language descriptions. This unique multimodal structure enables research on cross-representation learning, semantic understanding of software, and automated malware detection. Beyond security applications, SBAN supports broader tasks such as code translation, code explanation, and other software mining tasks involving heterogeneous data. It is particularly suited for scalable training of deep models, including transformers and other LLM architectures. By bridging low-level machine representations and high-level human semantics, SBAN provides a robust foundation for building intelligent systems that reason about code. We believe that this dataset opens new opportunities for mining software behavior, improving security analytics, and enhancing LLM capabilities in pre-training and fine-tuning tasks for software code mining.
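To make the four-layer alignment concrete, the sketch below shows one way a single SBAN sample might be modeled in Python. The class and field names are illustrative assumptions, not the dataset's published schema; the paper only specifies that each sample aligns source code, binary, assembly, and a natural-language description, plus a benign/malicious label.

```python
from dataclasses import dataclass

@dataclass
class SbanSample:
    """Hypothetical layout of one four-modal SBAN record.

    Field names are illustrative; the actual dataset schema may differ.
    """
    source_code: str        # high-level source (e.g., a C function)
    binary: bytes           # compiled machine code for the same function
    assembly: list          # disassembled instruction strings
    description: str        # natural-language summary of the behavior
    is_malicious: bool      # benign vs. malware label

def all_modalities_present(sample: SbanSample) -> bool:
    """Minimal consistency check: every modality is non-empty."""
    return bool(sample.source_code) and bool(sample.binary) \
        and bool(sample.assembly) and bool(sample.description)

# Toy example (x86-64 fragment chosen for illustration only).
sample = SbanSample(
    source_code="int add(int a, int b) { return a + b; }",
    binary=b"\x8d\x04\x37\xc3",
    assembly=["lea eax, [rdi+rsi]", "ret"],
    description="Adds two integers and returns the sum.",
    is_malicious=False,
)
```

A record shaped like this is what makes cross-representation tasks straightforward: code translation pairs `source_code` with `assembly`, code explanation pairs either with `description`, and malware detection uses `is_malicious` as the supervision signal.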
Problem

Research questions and friction points this paper is trying to address.

Advancing LLM pre-training for software code analysis
Enabling cross-representation learning and semantic understanding
Supporting automated malware detection and code translation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal dataset with four code representation layers
Enables cross-representation learning for software analysis
Supports scalable training of transformer-based language models