Bibliographic Information
- Authors
- Zekun Wu, Seonglae Cho, Umar Mohammed, Cristian Munoz, Kleyton Costa, Xin Guan, Theo King, Ze Wang, Emre Kazim, Adriano Koshiyama
- Publication Date
- 2025-05-13
- Last Updated
- 2025-07-01
- Affiliation
- Holistic AI
- Country
- United Kingdom
- Venue
Abstract
Open-source AI libraries are foundational to modern AI systems, yet they
present significant, underexamined risks spanning security, licensing,
maintenance, supply chain integrity, and regulatory compliance. We introduce
LibVulnWatch, a system that leverages recent advances in large language models
and agentic workflows to perform deep, evidence-based evaluations of these
libraries. Built on a graph-based orchestration of specialized agents, the
framework extracts, verifies, and quantifies risk using information from
repositories, documentation, and vulnerability databases. LibVulnWatch produces
reproducible, governance-aligned scores across five critical domains,
publishing results to a public leaderboard for ongoing ecosystem monitoring.
Applied to 20 widely used libraries, including ML frameworks, LLM inference
engines, and agent orchestration tools, our approach covers up to 88% of
OpenSSF Scorecard checks while surfacing up to 19 additional risks per library,
such as critical RCE vulnerabilities, missing SBOMs, and regulatory gaps. By
integrating advanced language technologies with the practical demands of
software risk assessment, this work demonstrates a scalable, transparent
mechanism for continuous supply chain evaluation and informed library
selection.
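
The abstract's "graph-based orchestration of specialized agents" can be made concrete with a minimal sketch. The Python below is a hypothetical illustration, not LibVulnWatch's actual implementation: `AgentGraph`, `Evidence`, and the agent functions are invented names, and real agents would extract and verify evidence from repositories, documentation, and vulnerability databases instead of returning placeholder findings.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Sequence

# Hypothetical sketch of graph-based agent orchestration: each node is a
# specialized agent that reads shared evidence and writes a risk score for
# one governance domain (security, licensing, maintenance, etc.).

@dataclass
class Evidence:
    repo_url: str
    findings: Dict[str, List[str]] = field(default_factory=dict)
    scores: Dict[str, float] = field(default_factory=dict)  # 0 (risky) .. 100 (safe)

AgentFn = Callable[[Evidence], Evidence]

class AgentGraph:
    """Minimal DAG executor: runs each agent once its prerequisites are done."""

    def __init__(self) -> None:
        self.nodes: Dict[str, AgentFn] = {}
        self.deps: Dict[str, List[str]] = {}

    def add_node(self, name: str, fn: AgentFn, after: Sequence[str] = ()) -> None:
        self.nodes[name] = fn
        self.deps[name] = list(after)

    def run(self, evidence: Evidence) -> Evidence:
        done: set = set()
        while len(done) < len(self.nodes):  # assumes the graph is acyclic
            for name, fn in self.nodes.items():
                if name not in done and all(d in done for d in self.deps[name]):
                    evidence = fn(evidence)
                    done.add(name)
        return evidence

def security_agent(ev: Evidence) -> Evidence:
    # A real agent would query vulnerability databases and verify each claim.
    ev.findings["security"] = ["placeholder: unpatched RCE advisory"]
    ev.scores["security"] = 35.0
    return ev

def licensing_agent(ev: Evidence) -> Evidence:
    # A real agent would inspect license files and dependency manifests.
    ev.findings["licensing"] = ["placeholder: missing SBOM"]
    ev.scores["licensing"] = 60.0
    return ev

def aggregate(ev: Evidence) -> Evidence:
    # Combine per-domain scores into one leaderboard score (simple mean here).
    domain_scores = list(ev.scores.values())
    ev.scores["overall"] = sum(domain_scores) / len(domain_scores)
    return ev

graph = AgentGraph()
graph.add_node("security", security_agent)
graph.add_node("licensing", licensing_agent)
graph.add_node("aggregate", aggregate, after=["security", "licensing"])
print(graph.run(Evidence(repo_url="https://github.com/example/lib")).scores)
```

Running the sketch prints per-domain and overall scores; in the system the paper describes, a pipeline of this shape feeds reproducible, governance-aligned scores to a public leaderboard.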