On the Role of Pre-trained Embeddings in Binary Code Analysis

📅 2024-07-01
🏛️ ACM Asia Conference on Computer and Communications Security
📈 Citations: 0
Influential: 0
🤖 AI Summary
Whether pre-trained assembly-code embeddings are necessary, and where their benefits end, remains unclear. Method: a systematic empirical study on a Debian corpus of 1.2 million functions, comparing pre-trained assembly-code embeddings against end-to-end supervised learning across five downstream binary analysis tasks. Contribution/Results: when sufficient labeled data is available, end-to-end methods consistently outperform pre-trained embeddings by a statistically significant margin, and the differences among embedding variants are negligible. Pre-trained embeddings yield clear advantages only in extremely low-resource settings (below roughly 1% labeled data). Crucially, this work introduces a quantitative decision criterion, based on labeled-data availability and task characteristics, for deciding whether to adopt pre-trained embeddings. The findings provide empirical evidence and practical guidelines for selecting representation-learning paradigms in binary analysis, challenging the default assumption that pre-trained embeddings are universally beneficial.

📝 Abstract
Deep learning has enabled remarkable progress in binary code analysis. In particular, pre-trained embeddings of assembly code have become a gold standard for solving analysis tasks, such as measuring code similarity or recognizing functions. These embeddings are capable of learning a vector representation from unlabeled code. In contrast to natural language processing, however, label information is not scarce for many tasks in binary code analysis. For example, labeled training data for function boundaries, optimization levels, and argument types can be easily derived from debug information provided by a compiler. Consequently, the main motivation of embeddings does not transfer directly to binary code analysis. In this paper, we explore the role of pre-trained embeddings from a critical perspective. To this end, we systematically evaluate recent embeddings for assembly code on five downstream tasks using a corpus of 1.2 million functions from the Debian distribution. We observe that several embeddings perform similarly when sufficient labeled data is available, and that differences reported in prior work are hardly noticeable. Surprisingly, we find that end-to-end learning without pre-training performs best on average, which calls into question the need for specialized embeddings. By varying the amount of labeled data, we eventually derive guidelines for when embeddings offer advantages and when end-to-end learning is preferable for binary code analysis.
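The abstract's key observation is that labels for many binary analysis tasks (function boundaries, optimization levels, argument types) can be derived directly from compiler output rather than hand-annotated. As a minimal sketch of that idea, the snippet below derives function-boundary labels from symbol-table output in the style of `nm --defined-only`; the sample listing is hypothetical, and a real pipeline would read DWARF debug information instead.

```python
# Hypothetical `nm --defined-only`-style listing: address, symbol type, name.
SAMPLE_NM_OUTPUT = """\
0000000000001130 T main
0000000000001190 T parse_args
0000000000001240 T usage
"""

def function_boundaries(nm_output):
    """Derive (name, start, end) labels from symbol-table text.

    Each function's end is approximated by the next code symbol's start;
    the last function's end is unknown (None) without section sizes.
    """
    syms = []
    for line in nm_output.strip().splitlines():
        addr, kind, name = line.split()
        if kind in ("T", "t"):  # keep only text-section (code) symbols
            syms.append((int(addr, 16), name))
    syms.sort()
    labels = []
    for i, (start, name) in enumerate(syms):
        end = syms[i + 1][0] if i + 1 < len(syms) else None
        labels.append((name, start, end))
    return labels
```

Labels obtained this way are essentially free at scale, which is why, as the abstract argues, the usual "labels are scarce" motivation for pre-training does not transfer directly to binary code analysis.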
Problem

Research questions and friction points this paper is trying to address.

Evaluate pre-trained embeddings in binary code analysis
Compare embeddings with end-to-end learning methods
Determine optimal conditions for using embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic evaluation of pre-trained embeddings on five downstream tasks
Evidence that end-to-end learning outperforms embeddings on average
Guidelines for when to use embeddings versus end-to-end learning
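The guideline the paper derives can be caricatured as a one-line decision rule: prefer pre-trained embeddings only in extreme low-label regimes, otherwise train end to end. The sketch below is a hypothetical illustration; the 1% default threshold echoes the summary above and is not a tuned constant.

```python
def choose_paradigm(labeled_fraction, threshold=0.01):
    """Toy decision rule for a representation-learning paradigm.

    labeled_fraction: share of the corpus with labels, in [0, 1].
    Below `threshold`, pre-trained embeddings tend to help; above it,
    end-to-end supervised learning performed best in the study.
    """
    if not 0.0 <= labeled_fraction <= 1.0:
        raise ValueError("labeled_fraction must be in [0, 1]")
    return "pre-trained embedding" if labeled_fraction < threshold else "end-to-end"
```

In practice the paper conditions this choice on task characteristics as well, so the fraction alone is only a first-order guide.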