Can LLMs Deobfuscate Binary Code? A Systematic Analysis of Large Language Models into Pseudocode Deobfuscation

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Binary code obfuscation poses a significant barrier to reverse engineering, and existing approaches struggle to recover the underlying semantic logic. This work proposes BinDeObfBench, the first evaluation benchmark encompassing multi-stage obfuscation techniques applied before, during, and after compilation, to systematically assess the performance of large language models (LLMs) on binary deobfuscation tasks. The study reveals that reasoning capabilities and domain-specific knowledge are more critical than model scale alone. Task-oriented fine-tuning substantially outperforms generic pretraining, while in-context learning benefits standard models but offers limited gains for reasoning-focused architectures. Experimental results demonstrate that fine-tuned reasoning-oriented LLMs exhibit strong generalization and robustness across diverse settings, including heavily obfuscated binaries, cross-instruction-set architectures, and varying compiler optimization levels.
📝 Abstract
Deobfuscating binary code remains a fundamental challenge in reverse engineering, as obfuscation is widely used to hinder analysis and conceal program logic. Although large language models (LLMs) have shown promise in recovering semantics from obfuscated binaries, a systematic evaluation of their effectiveness is still lacking. In this work, we present BinDeObfBench, the first comprehensive benchmark for assessing LLM-based binary deobfuscation across diverse transformations spanning pre-compilation, compile-time, and post-compilation stages. Our evaluation shows that deobfuscation performance depends more on reasoning capability and domain expertise than on model scale, and that task-specific supervised fine-tuning consistently outperforms broad domain pre-training. Reasoning models maintain robustness under severe obfuscation and generalize across different instruction set architectures (ISAs) and optimization levels. In-context learning benefits standard models but yields limited gains for reasoning models. Overall, our study highlights the importance of task-specific fine-tuning and reasoning-driven strategies, and positions BinDeObfBench as a basis for future work in binary deobfuscation.
Problem

Research questions and friction points this paper is trying to address.

binary deobfuscation
reverse engineering
code obfuscation
program semantics recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

binary deobfuscation
large language models
reasoning models
task-specific fine-tuning
BinDeObfBench