A Small Leak Sinks All: Exploring the Transferable Vulnerability of Source Code Models

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals cross-model transferable adversarial vulnerabilities between source code models (SCMs) and code large language models (LLM4Code), posing significant threats to the security and trustworthiness of AI-driven software ecosystems. To address this, we propose HABITAT—the first black-box, target-model-agnostic cross-model attack framework—leveraging a customized perturbation insertion mechanism and a hierarchical reinforcement learning strategy to adaptively generate highly transferable adversarial examples. Experiments demonstrate that adversarial samples crafted solely from conventional SCMs achieve a 64% attack success rate against LLM4Code, surpassing the state-of-the-art by over 15%. This is the first systematic empirical validation of deep representational alignment in vulnerability encoding between SCMs and LLM4Code. Moreover, HABITAT establishes a general, scalable analytical paradigm for code-AI security assessment, enabling robust, model-agnostic evaluation of adversarial robustness across diverse code intelligence systems.

📝 Abstract
Source code models (SCMs) learn proper embeddings from source code and have demonstrated significant success in various software engineering and security tasks. The recent explosive development of LLMs has extended the SCM family, bringing LLMs for code (LLM4Code) that revolutionize development workflows. Investigating the different kinds of SCM vulnerability is a cornerstone of the security and trustworthiness of AI-powered software ecosystems; however, the most fundamental one, transferable vulnerability, remains critically underexplored. Existing studies neither offer practical ways to produce effective adversarial samples for adversarial defense (they require access to the downstream classifier of SCMs), nor give heed to the LLM4Code widely used in modern software development platforms and cloud-based integrated development environments. Therefore, this work systematically studies the intrinsic vulnerability transferability of both traditional SCMs and LLM4Code, and proposes a victim-agnostic approach to generate practical adversarial samples. We design HABITAT, consisting of a tailored perturbation-insertion mechanism and a hierarchical reinforcement learning framework that adaptively selects optimal perturbations without requiring any access to the downstream classifier of SCMs. Furthermore, we conduct an intrinsic transferability analysis of SCM vulnerabilities, revealing a potential vulnerability correlation between traditional SCMs and LLM4Code, together with the fundamental factors that govern the success rate of victim-agnostic transfer attacks. These findings underscore critical focal points for developing robust defenses in the future. Experimental evaluation demonstrates that adversarial examples crafted based on traditional SCMs achieve up to 64% success rates against LLM4Code, surpassing the state-of-the-art by over 15%.
Problem

Research questions and friction points this paper is trying to address.

Investigating transferable vulnerabilities in source code models and LLMs for code
Developing victim-agnostic adversarial samples without accessing downstream classifiers
Analyzing vulnerability correlations between traditional SCMs and modern LLM4Code systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Reinforcement Learning framework for adaptive perturbations
Victim-agnostic approach generating adversarial samples without classifier access
Tailored perturbation-inserting mechanism analyzing vulnerability transferability
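The hierarchical selection idea can be sketched as two coupled epsilon-greedy policies: a high-level agent picks which semantics-preserving perturbation to apply, and a low-level agent picks where to insert it, with both trained against a surrogate reward instead of the victim's downstream classifier. Everything below (the perturbation names, positions, reward values, and hyperparameters) is an illustrative assumption for intuition, not the paper's actual implementation:

```python
import random

# Toy action spaces; HABITAT's real perturbation set is richer than this.
PERTURBATIONS = ["dead_code", "var_rename", "stmt_reorder"]  # high-level actions
POSITIONS = [0, 1, 2, 3]                                     # low-level actions

def surrogate_score(perturbation, position):
    """Stand-in reward: a real attack would query an accessible surrogate SCM
    and measure how strongly the perturbation shifts its prediction."""
    base = {"dead_code": 0.6, "var_rename": 0.4, "stmt_reorder": 0.2}
    return base[perturbation] + (0.1 if position == 2 else 0.0)

def hierarchical_select(episodes=2000, eps=0.1, lr=0.5, seed=0):
    """Two-level epsilon-greedy Q-learning: pick a perturbation type, then
    pick an insertion position, and reinforce both from the surrogate reward."""
    rng = random.Random(seed)
    q_hi = {p: 0.0 for p in PERTURBATIONS}
    q_lo = {(p, s): 0.0 for p in PERTURBATIONS for s in POSITIONS}
    for _ in range(episodes):
        # High level: which perturbation to apply.
        p = rng.choice(PERTURBATIONS) if rng.random() < eps else max(q_hi, key=q_hi.get)
        # Low level: where to insert it, conditioned on the chosen perturbation.
        if rng.random() < eps:
            s = rng.choice(POSITIONS)
        else:
            s = max(POSITIONS, key=lambda x: q_lo[(p, x)])
        r = surrogate_score(p, s)
        q_lo[(p, s)] += lr * (r - q_lo[(p, s)])
        # High-level value tracks the best achievable low-level value.
        q_hi[p] += lr * (max(q_lo[(p, x)] for x in POSITIONS) - q_hi[p])
    best_p = max(q_hi, key=q_hi.get)
    return best_p, max(POSITIONS, key=lambda x: q_lo[(best_p, x)])
```

Under this toy reward, the loop converges on the perturbation/position pair with the highest surrogate score, which is the point of the hierarchy: no gradient or logit access to the eventual victim (here, LLM4Code) is ever needed.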