Cross-Domain Evaluation of Transformer-Based Vulnerability Detection on Open & Industry Data

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two critical challenges: the poor cross-domain generalization of Transformer-based models (e.g., CodeBERT) from open-source to industrial settings, and the practical difficulty of deploying AI-powered security tools in real-world development workflows. To bridge this gap, we propose AI-DO—a CI/CD-integrated automated vulnerability detection framework. Methodologically, we systematically evaluate CodeBERT’s cross-dataset performance, apply undersampling to mitigate class imbalance, and embed the fine-tuned model directly into the code review stage for real-time vulnerability localization. Key contributions include: (1) the first empirical demonstration of significant performance degradation when open-source-trained models are applied to industrial code; (2) validation that fine-tuning on open-source data combined with undersampling substantially improves vulnerability detection recall; and (3) real-world pipeline deployment and developer surveys confirming that AI-DO achieves a pragmatic balance between security assurance and development efficiency—establishing a reproducible technical pathway and practical paradigm for transitioning academic models to trustworthy industrial adoption.

📝 Abstract
Deep learning solutions for vulnerability detection proposed in academic research are not always accessible to developers, and their applicability in industrial settings is rarely addressed. Transferring such technologies from academia to industry presents challenges related to trustworthiness, legacy systems, limited digital literacy, and the gap between academic and industrial expertise. For deep learning in particular, performance and integration into existing workflows are additional concerns. In this work, we first evaluate the performance of CodeBERT for detecting vulnerable functions in industrial and open-source software. We analyse its cross-domain generalisation when fine-tuned on open-source data and tested on industrial data, and vice versa, also exploring strategies for handling class imbalance. Based on these results, we develop AI-DO (Automating vulnerability detection Integration for Developers' Operations), a Continuous Integration-Continuous Deployment (CI/CD)-integrated recommender system that uses fine-tuned CodeBERT to detect and localise vulnerabilities during code review without disrupting workflows. Finally, we assess the tool's perceived usefulness through a survey with the company's IT professionals. Our results show that models trained on industrial data detect vulnerabilities accurately within the same domain but lose performance on open-source code, while a deep learner fine-tuned on open data, with appropriate undersampling techniques, improves the detection of vulnerabilities.
Problem

Research questions and friction points this paper is trying to address.

Evaluating CodeBERT's cross-domain vulnerability detection performance
Developing CI/CD-integrated recommender system for code review
Assessing industrial applicability of deep learning vulnerability detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned CodeBERT for vulnerability detection
CI/CD-integrated recommender system AI-DO
Handling class imbalance with undersampling techniques
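The undersampling idea listed above can be illustrated with a minimal sketch: before fine-tuning, the majority class (non-vulnerable functions) is randomly downsampled so the training set is balanced. This is an assumed illustration only; the function name, the 1:1 target ratio, and the label encoding (1 = vulnerable, 0 = safe) are not taken from the paper.

```python
import random

def undersample(samples, labels, seed=42):
    """Randomly downsample the majority class to the minority class size.

    `samples` and `labels` are parallel lists; labels are 1 (vulnerable)
    or 0 (safe). Returns a shuffled, class-balanced subset. (Illustrative
    sketch; not the paper's exact procedure.)
    """
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    majority, minority = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = rng.sample(majority, len(minority)) + minority
    rng.shuffle(kept)
    return [samples[i] for i in kept], [labels[i] for i in kept]
```

A balanced subset produced this way would then be fed to the CodeBERT fine-tuning step; balancing typically trades some precision for higher recall on the rare vulnerable class, which matches the recall improvement the summary reports.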
Moritz Mock
Faculty of Engineering, Free University of Bozen-Bolzano, Bolzano, Italy
Thomas Forrer
R&D Department, Würth Phoenix, Bolzano, Italy
Barbara Russo
Full professor of Computer Science, Free University of Bozen-Bolzano
Software/Systems Engineering · Software Measurement · Software Reliability · Software Testing · Technology Adoption