🤖 AI Summary
Large language models (LLMs) exhibit significant limitations in semantic understanding of low-level programming languages, particularly assembly code. To address this, we propose a fully automated, annotation-free synthetic data generation method: leveraging executable C programs to produce semantically aligned C–assembly bilingual instruction pairs, enabling cross-layer code knowledge transfer. Our approach supports instruction fine-tuning and is compatible with diverse LLM architectures and parameter scales. Evaluated on binary code summarization and vulnerability detection tasks, fine-tuned models achieve substantial improvements (average BLEU-4 score increases by 12.7% and F1 score by 9.3%), with consistent generalization across model families. The core contribution is the first systematic paradigm for generating high-quality, semantically faithful assembly explanations directly from executable code, eliminating reliance on manual annotation while preserving precise program semantics.
📝 Abstract
Large Language Models (LLMs) typically excel at coding tasks involving high-level programming languages but struggle with lower-level languages such as assembly. We propose a synthetic data generation method named C-ing Clearly, which leverages the corresponding C code to enhance an LLM's understanding of assembly. By fine-tuning on data generated through our method, we demonstrate improved LLM performance on binary code summarization and vulnerability detection, with consistent gains across different LLM families and model sizes.