🤖 AI Summary
Binary code lacks semantic information, severely hindering reverse engineering and security analysis in source-unavailable scenarios. To address this, we propose ContraBin—the first pretraining framework for binary understanding that jointly leverages source code, binary code, and comments via ternary contrastive learning. Our key contributions are: (1) a novel tri-modal contrastive architecture aligning source, binary, and comment representations; (2) empirical discovery that synthetically generated comments outperform human-written ones, coupled with simplex interpolation to enhance representation robustness; and (3) integration of intermediate representation modeling and cross-modal embedding alignment. Evaluated on four downstream tasks—function classification, function name recovery, code summarization, and reverse engineering—ContraBin consistently surpasses state-of-the-art methods, achieving significant improvements in accuracy, mean average precision (mAP), and BLEU score. The framework establishes a scalable, multimodal representation learning paradigm for binary code understanding.
📝 Abstract
Binary code analysis and comprehension are critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary code lacks semantic information and is more difficult for human engineers to understand and analyze. In this paper, we present ContraBin, a contrastive learning technique that integrates source code and comment information along with binaries to create an embedding capable of aiding binary analysis and comprehension tasks. Specifically, ContraBin comprises three components: (1) a primary contrastive learning method for initial pre-training, (2) a simplex interpolation method to integrate source code, comments, and binary code, and (3) an intermediate representation learning algorithm to train a binary code embedding. We further analyze the impact of human-written and synthetic comments on binary code comprehension tasks, revealing a significant performance disparity: while synthetic comments provide substantial benefits, human-written comments introduce noise and can even degrade performance relative to using no comments at all. These findings reshape the narrative around the role of comment types in binary code analysis. We evaluate the effectiveness of ContraBin on four indicative downstream tasks related to binary code: algorithmic functionality classification, function name recovery, code summarization, and reverse engineering. The results show that ContraBin considerably improves performance on all four tasks, measured by accuracy, mean average precision (mAP), and BLEU score as appropriate. ContraBin is the first language representation model to incorporate source code, binary code, and comments into contrastive code representation learning and is intended to contribute to the field of binary code analysis. The dataset used in this study is available for further research.
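To make the two core ideas concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of (a) simplex interpolation, i.e., a convex combination of the three modality embeddings with barycentric weights drawn from the probability simplex, and (b) an InfoNCE-style contrastive loss that pulls an anchor embedding toward a positive and away from negatives. All function names, the embedding dimensionality, and the temperature value are illustrative assumptions.

```python
import math
import random

def simplex_weights(k, rng):
    # Sample barycentric weights uniformly on the (k-1)-simplex
    # by normalizing i.i.d. exponential draws (assumed sampling scheme).
    xs = [rng.expovariate(1.0) for _ in range(k)]
    total = sum(xs)
    return [x / total for x in xs]

def interpolate(embeddings, weights):
    # Convex combination of modality embeddings
    # (e.g., source code, binary code, comment).
    dim = len(embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, embeddings))
            for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    # InfoNCE-style objective: maximize similarity to the positive
    # relative to the negatives.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

In this framing, an interpolated embedding (e.g., a mix of source, binary, and comment vectors) can serve as the positive for a binary-code anchor, which is one way the ternary alignment described above could be realized.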