Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the industrial adoption status, barriers, and optimization pathways of AI in Software Vulnerability Management (SVM) across its full lifecycle. Employing a mixed-methods empirical survey—comprising questionnaires and in-depth interviews with 60 practitioners from 27 countries—it introduces a "socio-technical co-adoption paradigm" specific to SVM, uncovering how human-AI collaboration is shaped by organizational governance and process structures. Results indicate that while 69% of users report satisfaction with current AI tools, three critical bottlenecks persist: high false-positive rates, weak contextual awareness, and insufficient trustworthiness. To address these, the study proposes a four-dimensional industrial adoption framework—centered on explainability, contextual awareness, workflow integration, and verification capability—and derives actionable best practices spanning development, deployment, and governance phases.

📝 Abstract
Artificial Intelligence (AI) has revolutionized software development, particularly by automating repetitive tasks and improving developer productivity. While these advancements are well-documented, the use of AI-powered tools for Software Vulnerability Management (SVM), such as vulnerability detection and repair, remains underexplored in industry settings. To bridge this gap, our study aims to determine the extent of adoption of AI-powered tools for SVM, identify barriers and facilitators to their use, and gather insights to help improve these tools to better meet industry needs. We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. The survey incorporates both quantitative and qualitative questions to analyze adoption trends, assess tool strengths, identify practical challenges, and uncover opportunities for improvement. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use. Practitioners value these tools for their speed, coverage, and accessibility. However, concerns about false positives, missing context, and trust issues remain prevalent. We observe a socio-technical adoption pattern in which AI outputs are filtered through human oversight and organizational governance. To support safe and effective use of AI for SVM, we recommend improvements in explainability, contextual awareness, integration workflows, and validation practices. We assert that these findings can offer practical guidance for practitioners, tool developers, and researchers seeking to enhance secure software development through the use of AI.
Problem

Research questions and friction points this paper is trying to address.

Investigates AI tool adoption for software vulnerability management in industry.
Identifies barriers and facilitators to AI use in vulnerability detection and repair.
Recommends improvements for AI tools to meet industry needs effectively.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveyed industry adoption of AI-powered vulnerability management tools.
Identified a socio-technical adoption pattern in which AI outputs pass through human oversight.
Recommended improvements in explainability, contextual awareness, workflow integration, and validation.