Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the underappreciated risk of sensitive credential leakage in third-party skills integrated by large language model (LLM) agents, which often execute in privileged environments. We introduce the first comprehensive taxonomy of ten leakage patterns, four accidental and six adversarial, revealing that credential exposure is fundamentally cross-modal and that debugging output inadvertently disclosed to the LLM is the primary leakage vector. To systematically detect such vulnerabilities, we propose an end-to-end pipeline combining static analysis, natural language understanding, sandboxed execution, and manual validation. Applying this framework in a large-scale empirical study of 17,022 skills, we identify 520 vulnerable skills containing 1,708 distinct issues, 89.6% of which involve credentials exploitable without additional privileges. Following responsible disclosure, all malicious skills were removed from repositories, and 91.6% of hardcoded credentials were remediated.
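The debug-logging vector described in the summary can be illustrated with a minimal hypothetical skill (the function, URL, and key below are invented for illustration and are not taken from the study): a troubleshooting `print` of the request headers writes the bearer token to stdout, and because an agent reads a skill's stdout as tool output, the credential enters the LLM's context.

```python
# Hypothetical skill showing accidental credential leakage via debug logging.
# All names here are illustrative, not from any real skill in the study.

def fetch_weather(city: str, api_key: str) -> str:
    url = f"https://api.example.com/weather?city={city}"
    headers = {"Authorization": f"Bearer {api_key}"}
    # BUG: this debug line exposes the bearer token on stdout, which the
    # LLM agent captures as tool output and may echo or store.
    debug_line = f"DEBUG: GET {url} headers={headers}"
    print(debug_line)
    return debug_line  # returned only so the leak is easy to inspect

leak = fetch_weather("Oslo", "sk-live-0123456789abcdef")
```

The fix is equally small: log the URL but redact or omit the `Authorization` header before printing.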
📝 Abstract
Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, yet the resulting leakage risks are poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 vulnerable skills with 1,708 issues and derive a taxonomy of 10 leakage patterns (4 accidental and 6 adversarial). We find that (1) leakage is fundamentally cross-modal: 76.3% of cases require joint analysis of code and natural language, while 3.1% arise purely from prompt injection; (2) debug logging is the primary vector, with print and console.log causing 73.5% of leaks because stdout is exposed to the LLM; and (3) leaked credentials are both exploitable (89.6% without additional privileges) and persistent, as forks retain secrets even after upstream fixes. After disclosure, all malicious skills were removed and 91.6% of hardcoded credentials were fixed. We release our dataset, taxonomy, and detection pipeline to support future research.
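The static-analysis stage of such a pipeline can be sketched in a few lines. The scanner below is a simplified assumption about how one might flag debug statements that reference credential-like variable names; it is not the authors' actual pipeline, and the regexes and sample snippet are illustrative only.

```python
import re

# Hypothetical, simplified static scanner: flags source lines where a
# debug print / console.log / logging call references a variable whose
# name suggests a credential. A rough sketch, not the paper's pipeline.

SECRET_NAME = re.compile(r"(api[_-]?key|token|secret|password|credential)", re.I)
DEBUG_CALL = re.compile(r"\b(print|console\.log|logging\.\w+)\s*\(")

def flag_leaky_debug_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line_text) for suspicious debug statements."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DEBUG_CALL.search(line) and SECRET_NAME.search(line):
            findings.append((lineno, line.strip()))
    return findings

# Illustrative skill source, scanned as plain text (so the JS line is
# caught too, mirroring the cross-language nature of skill code).
sample = '''
api_key = load_key()
print("starting request")
print("DEBUG headers:", {"Authorization": api_key})
console.log(`token=${authToken}`)
'''
hits = flag_leaky_debug_lines(sample)
```

A real detector would add dataflow tracking and natural-language analysis of skill descriptions, since the paper reports that 76.3% of leaks require reasoning over code and prose jointly; a lexical pass like this only covers the easy cases.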
Problem

Research questions and friction points this paper is trying to address.

Credential Leakage
LLM Agent Skills
Third-party Skills
Security Vulnerability
Privileged Environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Credential Leakage
LLM Agents
Third-party Skills
Cross-modal Analysis
Debug Logging