🤖 AI Summary
Existing LLM-based decompilation methods treat assembly code as a linear token sequence, neglecting jump semantics and data sections—leading to limited semantic reconstruction capability. To address this, we propose Source-level Abstract Logic Trees (SALT), the first explicit representation modeling stable high-level logical structures inherent in binaries. SALT jointly encodes control flow, data dependencies, and jump semantics to guide LLMs in precise program logic understanding. Our end-to-end, semantics-aware decompilation framework integrates static analysis for SALT construction, LLM fine-tuning, error correction, and symbolic recovery. Evaluated on three benchmarks including Decompile-Eval, our method achieves state-of-the-art performance: TCP improves to 70.4% (+10.6%), with strong robustness against four prevalent obfuscation techniques. User studies confirm that outputs align more closely with source-code intuition and significantly enhance manual analysis efficiency.
📝 Abstract
Decompilation is widely used in reverse engineering to recover high-level language code from binary executables. While recent approaches leveraging Large Language Models (LLMs) have shown promising progress, they typically treat assembly code as a linear sequence of instructions, overlooking the arbitrary jump patterns and isolated data segments inherent to binary files. This limitation significantly hinders their ability to correctly infer source-code semantics from assembly. To address it, we propose SALTM, a novel binary decompilation method that abstracts stable logical features shared between binary and source code. The core idea of SALTM is to abstract selected binary-level operations, such as specific jumps, into a high-level logic framework that better guides LLMs in semantic recovery. Given a binary function, SALTM constructs a Source-level Abstract Logic Tree (SALT) from the assembly code to approximate the logical structure of the high-level language. It then fine-tunes an LLM on the reconstructed SALT to generate decompiled code. Finally, the output is refined through error correction and symbol recovery to improve readability and correctness. We compare SALTM against three categories of baselines (general-purpose LLMs, commercial decompilers, and dedicated decompilation methods) on three well-known datasets (Decompile-Eval, MBPP, ExeBench). Our experimental results demonstrate that SALTM is highly effective at recovering the logic of the source code, significantly outperforming state-of-the-art methods (e.g., a 70.4% TCP rate on Decompile-Eval, a 10.6% improvement). The results further validate its robustness against four commonly used obfuscation techniques. Additionally, analyses of real-world software and a user study confirm that our decompiled output offers superior assistance to human analysts in comprehending binary functions.
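The abstract describes folding binary-level jumps into a source-level logic tree. As a rough intuition only (this toy is not the paper's SALT construction; the instruction format, node names, and single-pass folding are all assumptions made for illustration), a backward conditional jump can be read as a loop and a forward conditional jump as an if-block:

```python
def build_logic_tree(insns):
    """Toy sketch of jump-aware structuring.

    insns: list of (op, arg) tuples; op "jcc" means "jump to index arg
    if a condition holds", anything else is an opaque statement.
    Forward conditional jumps become ("if", body) nodes over the span
    they skip; backward conditional jumps become ("loop", body) nodes
    over the span they re-enter. Nested jumps inside a skipped span are
    assumed local to that span (kept simple on purpose).
    """
    tree, i = [], 0  # tree holds (start_index, node) pairs
    while i < len(insns):
        op, arg = insns[i]
        if op == "jcc" and arg > i:
            # forward jump: the skipped instructions form an if-body
            tree.append((i, ("if", build_logic_tree(insns[i + 1:arg]))))
            i = arg
        elif op == "jcc":
            # backward jump: wrap already-emitted nodes from the jump
            # target onward into a loop node
            body = [node for start, node in tree if start >= arg]
            tree = [(start, node) for start, node in tree if start < arg]
            tree.append((arg, ("loop", body)))
            i += 1
        else:
            tree.append((i, ("stmt", op, arg)))
            i += 1
    return [node for _, node in tree]


# A counting loop: the backward jcc at index 3 folds indices 1-2 into a loop.
loop_prog = [("mov", "eax,0"), ("add", "eax,1"),
             ("cmp", "eax,10"), ("jcc", 1), ("ret", "")]
print(build_logic_tree(loop_prog))
```

The point of the sketch is the one the abstract makes: once jumps are lifted into nested `if`/`loop` nodes, the representation is far closer to source-level control flow than a flat token sequence, which is what SALT-style guidance gives the LLM.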