ShieldedCode: Learning Robust Representations for Virtual Machine Protected Code

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first representation learning framework tailored for virtual machine (VM)-based code protection, addressing the limitations of traditional rule-based obfuscation techniques, which are costly to design and vulnerable to automated reverse engineering. The authors construct a large-scale paired dataset of source and protected code and introduce an approach that jointly optimizes language modeling with hierarchical instruction dependency modeling, functionality- and protection-aware contrastive learning, and two-stage continual pretraining. By incorporating a protection-aware contrastive objective and a quantifiable protection effectiveness ranking task, the study pioneers a learning-driven paradigm for software defense. Experimentally, the method achieves 26.95% Pass@1 on L0 VM code generation, outperforming GPT-4o (22.58%), and improves Recall@1 in binary similarity detection by 10%, significantly surpassing existing approaches such as jTrans.
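The summary reports results in Pass@1, the fraction of problems solved by the top generated sample. For reference, the standard unbiased Pass@k estimator used in code-generation evaluation (Chen et al., 2021) can be sketched as below; whether ShieldedCode uses this exact estimator or single-sample accuracy is not specified here.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: n samples generated per problem,
    c of which pass the tests, evaluated at a budget of k samples."""
    if n - c < k:
        # every size-k subset must contain at least one passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples, 5 passing, budget 1 -> expected Pass@1 of 0.5
print(pass_at_k(10, 5, 1))
```

Averaging this quantity over all benchmark problems yields the reported Pass@k score.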

📝 Abstract
Large language models (LLMs) have achieved remarkable progress in code generation, yet their potential for software protection remains largely untapped. Reverse engineering continues to threaten software security, while traditional virtual machine protection (VMP) relies on rigid, rule-based transformations that are costly to design and vulnerable to automated analysis. In this work, we present ShieldedCode, the first protection-aware framework that learns robust representations of VMP-protected code. Our approach builds large-scale paired datasets of source code and normalized VM implementations, and introduces hierarchical dependency modeling at intra-, preceding-, and inter-instruction levels. We jointly optimize language modeling with functionality-aware and protection-aware contrastive objectives to capture both semantic equivalence and protection strength. To further assess resilience, we propose a protection effectiveness optimization task that quantifies and ranks different VM variants derived from the same source. Coupled with a two-stage continual pre-training and fine-tuning pipeline, our method enables models to generate, compare, and reason over protected code. Extensive experiments show that our framework significantly improves robustness across diverse protection levels, opening a new research direction for learning-based software defense. ShieldedCode achieves 26.95% Pass@1 on L0 VM code generation, compared to 22.58% for GPT-4o, and improves binary similarity detection Recall@1 by 10% over state-of-the-art methods such as jTrans.
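The abstract describes jointly optimizing functionality-aware and protection-aware contrastive objectives. A minimal sketch of how two InfoNCE-style losses over paired embeddings could be combined is shown below; the pairing scheme, the weighting `alpha`, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """InfoNCE over a batch of embeddings: each anchor's positive is the
    same-index row of `positives`; all other rows act as in-batch negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) cosine-similarity logits
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # cross-entropy on the diagonal

def dual_contrastive_loss(src_emb: np.ndarray,
                          vm_emb: np.ndarray,
                          same_protection_emb: np.ndarray,
                          alpha: float = 0.5) -> float:
    """Hypothetical combination of the two objectives named in the abstract:
    functionality-aware (source vs. its VM-protected variant) and
    protection-aware (VM variants sharing the same protection level)."""
    func_loss = info_nce(src_emb, vm_emb)
    prot_loss = info_nce(vm_emb, same_protection_emb)
    return alpha * func_loss + (1.0 - alpha) * prot_loss
```

In practice such a term would be added to the language-modeling loss during continual pre-training; the paper's exact pair construction and weighting are not detailed in this summary.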
Problem

Research questions and friction points this paper is trying to address.

virtual machine protection
reverse engineering
code representation
software security
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

virtual machine protection
protection-aware representation learning
contrastive learning
code obfuscation
binary similarity detection
Mingqiao Mo
University of Chinese Academy of Sciences
Yunlong Tan
University of Chinese Academy of Sciences
Hao Zhang
Division for Theoretical Physics, Institute of High Energy Physics, Chinese Academy of Sciences
Heng Zhang
South China Normal University
Yangfan He
University of Minnesota - Twin Cities