LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries

📅 2025-05-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-source AI libraries pose latent risks across security, licensing, maintainability, supply-chain integrity, and regulatory compliance—risks inadequately captured by existing assessment frameworks. To address this gap, we propose the first governance-aligned evaluation paradigm tailored to the AI supply chain. Our method employs a LangGraph-based multi-agent graph architecture that performs coordinated, source-code-level risk assessment and quantification across five dimensions (security, licensing, maintenance, supply chain, compliance). The system integrates knowledge-graph-based provenance tracing, OpenSSF Scorecard-compatible mapping, and automated detection of SBOMs, telemetry, and compliance documentation, while supporting longitudinal ecosystem monitoring. Evaluated on 20 mainstream AI libraries, it covers up to 88% of OpenSSF Scorecard checks and identifies up to 19 additional hidden risks per library—including remote code execution vulnerabilities, license conflicts, and missing audit documentation. We publicly release a traceable, auditable risk leaderboard.

📝 Abstract
Open-source AI libraries are foundational to modern AI systems but pose significant, underexamined risks across security, licensing, maintenance, supply chain integrity, and regulatory compliance. We present LibVulnWatch, a graph-based agentic assessment framework that performs deep, source-grounded evaluations of these libraries. Built on LangGraph, the system coordinates a directed acyclic graph of specialized agents to extract, verify, and quantify risk using evidence from trusted sources such as repositories, documentation, and vulnerability databases. LibVulnWatch generates reproducible, governance-aligned scores across five critical domains, publishing them to a public leaderboard for longitudinal ecosystem monitoring. Applied to 20 widely used libraries, including ML frameworks, LLM inference engines, and agent orchestration tools, our system covers up to 88% of OpenSSF Scorecard checks while uncovering up to 19 additional risks per library. These include critical Remote Code Execution (RCE) vulnerabilities, absent Software Bills of Materials (SBOMs), licensing constraints, undocumented telemetry, and widespread gaps in regulatory documentation and auditability. By translating high-level governance principles into practical, verifiable metrics, LibVulnWatch advances technical AI governance with a scalable, transparent mechanism for continuous supply chain risk assessment and informed library selection.
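The Scorecard coverage figure quoted above is, in effect, a set-overlap ratio between the checks the system can map onto OpenSSF Scorecard and the full Scorecard check list. A minimal sketch (the check names and counts below are illustrative placeholders, not the actual OpenSSF Scorecard checks):

```python
# Illustrative only: the real Scorecard check list differs; the counts here
# are chosen simply to reproduce an ~88% overlap like the one reported.
scorecard_checks = {f"scorecard_check_{i}" for i in range(17)}
mapped_by_system = {f"scorecard_check_{i}" for i in range(15)}

# Coverage = fraction of Scorecard checks the system's own checks map onto.
coverage = len(scorecard_checks & mapped_by_system) / len(scorecard_checks)
print(f"coverage = {coverage:.0%}")
```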
Problem

Research questions and friction points this paper is trying to address.

Identifying hidden vulnerabilities in open-source AI libraries
Assessing risks in security, licensing, and regulatory compliance
Providing continuous supply chain risk evaluation for AI libraries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based agentic framework for AI library assessment
Specialized agents extract and verify risks from trusted sources
Generates reproducible governance scores for ecosystem monitoring
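The coordinated-agent design summarized above can be sketched in plain Python. This is a simplified stand-in, not the paper's LangGraph implementation: the five domain names follow the paper, but every scoring rule and evidence field below is a hypothetical placeholder.

```python
from typing import Callable, Dict

# Hypothetical evidence bundle an upstream extraction agent would produce
# from repositories, documentation, and vulnerability databases.
Evidence = Dict[str, object]

def assess_security(ev: Evidence) -> float:
    # Placeholder: penalize each known CVE attributed to the library.
    return max(0.0, 1.0 - 0.1 * len(ev.get("cves", [])))

def assess_licensing(ev: Evidence) -> float:
    # Placeholder: full score only if no license conflicts were detected.
    return 1.0 if ev.get("license_conflicts", 0) == 0 else 0.5

def assess_maintenance(ev: Evidence) -> float:
    return 1.0 if ev.get("active_maintainers", 0) >= 3 else 0.4

def assess_supply_chain(ev: Evidence) -> float:
    return 1.0 if ev.get("sbom_present") else 0.3

def assess_compliance(ev: Evidence) -> float:
    return 1.0 if ev.get("audit_docs") else 0.2

# Each "agent" is a node over shared evidence; running all five and
# aggregating mirrors the coordinated five-dimensional assessment.
AGENTS: Dict[str, Callable[[Evidence], float]] = {
    "security": assess_security,
    "licensing": assess_licensing,
    "maintenance": assess_maintenance,
    "supply_chain": assess_supply_chain,
    "compliance": assess_compliance,
}

def assess_library(evidence: Evidence) -> Dict[str, float]:
    scores = {name: agent(evidence) for name, agent in AGENTS.items()}
    scores["overall"] = sum(scores.values()) / len(AGENTS)
    return scores

if __name__ == "__main__":
    demo = {"cves": ["CVE-2024-0001"], "license_conflicts": 0,
            "active_maintainers": 5, "sbom_present": False, "audit_docs": True}
    print(assess_library(demo))
```

In the paper's system the nodes form a directed acyclic graph so that evidence extraction, verification, and quantification can be ordered and run per domain; this sketch flattens that into one pass for brevity.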
Zekun Wu
Research Scientist, Holistic AI / PhD Student, University College London
Agentic AI · Responsible AI · Behavioural Robustness · Explainability · Interpretability
Seonglae Cho
University College London
Mechanistic Interpretability · Language Modeling · AI Alignment
Umar Mohammed
Holistic AI
Cristian Munoz
Holistic AI
K. Costa
Holistic AI
Xin Guan
Research, Holistic AI
Ethical AI and Normative Reasoning
Theo King
Holistic AI
Ze Wang
Holistic AI, University College London
Emre Kazim
Holistic AI, University College London
Adriano Koshiyama
Holistic AI