CREBench: Evaluating Large Language Models in Cryptographic Binary Reverse Engineering

📅 2026-04-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the critical yet underexplored role of large language models (LLMs) in cryptographic binary reverse engineering, a task essential for vulnerability discovery and malware analysis that has traditionally relied on expert knowledge. To systematically evaluate LLM capabilities in this domain, the authors introduce CREBench, the first benchmark specifically designed for this purpose, comprising 432 CTF-style challenges that span 48 cryptographic algorithms, three categories of insecure key usage, and three difficulty levels. They further propose a four-stage evaluation framework that assesses LLM performance from algorithm identification through correct input recovery. Among the eight state-of-the-art LLMs tested, GPT-5.4 achieves the highest score (64.03/100, solving 59% of challenges), yet still falls significantly short of human experts, who attain a baseline score of 92.19.
📝 Abstract
Reverse engineering (RE) is central to software security, particularly for cryptographic programs that handle sensitive data and are highly prone to vulnerabilities. It supports critical tasks such as vulnerability discovery and malware analysis. Despite its importance, RE remains labor-intensive and requires substantial expertise, making large language models (LLMs) a potential solution for automating the process. However, their capabilities for RE remain systematically underexplored. To address this gap, we study the cryptographic binary RE capabilities of LLMs and introduce CREBench, a benchmark comprising 432 challenges built from 48 standard cryptographic algorithms, 3 insecure crypto key usage scenarios, and 3 difficulty levels. Each challenge follows the format of a Capture-the-Flag (CTF) RE challenge, requiring the model to analyze the underlying cryptographic logic and recover the correct input. We design an evaluation framework comprising four sub-tasks, from algorithm identification to correct flag recovery. We evaluate eight frontier LLMs on CREBench. GPT-5.4, the best-performing model, achieves 64.03 out of 100 and recovers the flag in 59% of challenges. We also establish a strong human expert baseline of 92.19 points, showing that humans maintain an advantage in cryptographic RE tasks. Our code and dataset are available at https://github.com/wangyu-ovo/CREBench.
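To make the CTF-style setup concrete, below is a minimal, hypothetical sketch of the kind of challenge binary the abstract describes; it is our own illustration, not a sample from CREBench. The solver must identify the (deliberately weak) single-byte XOR scheme and recover the accepted input. The key 0x5A and the expected input "flag{xor}" are assumptions chosen for this example.

```c
/* Hypothetical toy CTF-style crypto RE challenge (illustration only,
 * not taken from CREBench). The binary XORs the user's input with a
 * fixed single-byte key and compares it against an embedded ciphertext;
 * solving it means spotting the scheme and recovering the input. */
#include <stdio.h>
#include <string.h>

static const unsigned char key = 0x5A;                 /* assumed key */
static const unsigned char expected[] = {              /* "flag{xor}" ^ 0x5A */
    0x3C, 0x36, 0x3B, 0x3D, 0x21, 0x22, 0x35, 0x28, 0x27
};

int main(void) {
    char buf[64];
    if (!fgets(buf, sizeof buf, stdin)) return 1;
    buf[strcspn(buf, "\n")] = '\0';                    /* strip newline */

    if (strlen(buf) != sizeof expected) { puts("Wrong."); return 1; }
    for (size_t i = 0; i < sizeof expected; i++) {
        if ((unsigned char)(buf[i] ^ key) != expected[i]) {
            puts("Wrong.");
            return 1;
        }
    }
    puts("Correct! The flag is your input.");
    return 0;
}
```

A real CREBench challenge would be distributed as a compiled binary, so a solver (human or LLM) works from disassembly or decompiled code rather than this source; the four-stage evaluation then scores progress from identifying the algorithm up to recovering the correct flag.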
Problem

Research questions and friction points this paper is trying to address.

Reverse Engineering
Large Language Models
Cryptographic Binary
Benchmarking
CTF Challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cryptographic Reverse Engineering
Large Language Models
Benchmarking
Binary Analysis
Capture-the-Flag
Baicheng Chen
University of California San Diego
Metasurface, Metamaterial, Wireless Sensing, Mobile Health, Security/Privacy
Yu Wang
Shanghai Jiao Tong University & Shanghai AI Laboratory
Natural Language Processing, Speech and Language Processing, Large Language Model
Ziheng Zhou
Institute of Interdisciplinary Information Sciences, Tsinghua University
Xiangru Liu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Juanru Li
Shanghai Jiao Tong University
Computer Security
Yilei Chen
Institute of Interdisciplinary Information Sciences, Tsinghua University; Shanghai Qi Zhi Institute; Xiongan AI Institute
Tianxing He
Tsinghua University
NLP