Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Foundational AI models deployed in high-stakes domains (e.g., finance, healthcare) face emerging security threats, including data poisoning, model extraction, prompt injection, and black-box jailbreaking, for which no systematic, ML-specific threat modeling spans the full AI lifecycle. Method: We propose the first ML-specific threat modeling framework covering pretraining through inference. Using a multi-agent RAG system, we integrate MITRE ATLAS, AI incident databases, and GitHub/Python ecosystem data to empirically characterize threats. Contribution/Results: We uncover novel attack patterns, including commercial LLM API model stealing and preference-driven plain-text jailbreaking, and introduce an ontology-based dynamic threat graph that precisely maps 93 ML-specific threats and their dominant TTPs (e.g., MASTERKEY jailbreak, diffusion backdoors), identifying high-risk vulnerability clusters. Our framework enables adaptive ML security design and fills critical gaps in threat modeling for multimodal systems and RAG architectures.

📝 Abstract
Machine learning (ML) underpins foundation models in finance, healthcare, and critical infrastructure, making them targets for data poisoning, model extraction, prompt injection, automated jailbreaking, and preference-guided black-box attacks that exploit model comparisons. Larger models can be more vulnerable to introspection-driven jailbreaks and cross-modal manipulation. Traditional cybersecurity lacks ML-specific threat modeling for foundation, multimodal, and RAG systems. Objective: Characterize ML security risks by identifying dominant TTPs, vulnerabilities, and targeted lifecycle stages. Methods: We extract 93 threats from MITRE ATLAS (26), the AI Incident Database (12), and the literature (55), and analyze 854 GitHub/Python repositories. A multi-agent RAG system (ChatGPT-4o, temperature 0.4) mines 300+ articles to build an ontology-driven threat graph linking TTPs, vulnerabilities, and lifecycle stages. Results: We identify previously unreported threats, including commercial LLM API model stealing, parameter-memorization leakage, and preference-guided text-only jailbreaks. Dominant TTPs include MASTERKEY-style jailbreaking, federated poisoning, diffusion backdoors, and preference-optimization leakage, mainly affecting pre-training and inference. Graph analysis reveals dense vulnerability clusters in libraries with poor patch propagation. Conclusion: Adaptive, ML-specific security frameworks combining dependency hygiene, threat intelligence, and monitoring are essential to mitigate supply-chain and inference risks across the ML lifecycle.
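The threat graph the abstract describes can be pictured as nodes for TTPs, vulnerabilities, and lifecycle stages, with edges linking each TTP to what it exploits and where it strikes. Below is a minimal sketch of that structure in plain Python; the edge entries are illustrative examples drawn from the TTP names above, not the paper's actual dataset, and the helper names (`build_threat_graph`, `stage_load`) are this sketch's own.

```python
from collections import defaultdict

# Illustrative (TTP, vulnerability, lifecycle stage) triples, loosely
# echoing the abstract's examples; NOT the paper's real threat data.
EDGES = [
    ("MASTERKEY-style jailbreak", "weak prompt filtering", "inference"),
    ("federated poisoning", "unvetted client updates", "pre-training"),
    ("diffusion backdoor", "poisoned training data", "pre-training"),
    ("API model stealing", "unrestricted query access", "inference"),
]

def build_threat_graph(edges):
    """Adjacency map: each node -> set of directly connected nodes."""
    graph = defaultdict(set)
    for ttp, vuln, stage in edges:
        graph[ttp].update({vuln, stage})
        graph[vuln].add(ttp)
        graph[stage].add(ttp)
    return graph

def stage_load(graph, stages):
    """Count TTPs attached to each lifecycle stage; dense stages
    correspond to the high-risk clusters the graph analysis flags."""
    return {s: len(graph[s]) for s in stages}

graph = build_threat_graph(EDGES)
print(stage_load(graph, ["pre-training", "inference"]))
```

With these toy edges, pre-training and inference each attract two TTPs, mirroring the abstract's finding that those two stages absorb most of the dominant attack patterns.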
Problem

Research questions and friction points this paper is trying to address.

Characterizes ML security risks in AI systems
Identifies unreported threats and dominant attack patterns
Proposes adaptive frameworks for lifecycle threat mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent RAG system builds ontology-driven threat graph
Identifies unreported threats like API model stealing and leakage
Proposes adaptive security frameworks with dependency hygiene and monitoring
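The "dependency hygiene" piece of the proposed framework can be illustrated with a tiny audit pass over pinned requirements: flag any package whose pinned version falls below its first patched release, the situation behind the abstract's "poor patch propagation" clusters. The advisory table and version numbers below are hypothetical placeholders, not a real vulnerability feed.

```python
# Hypothetical advisory table: package -> first patched version.
# These entries are invented for illustration only.
ADVISORIES = {"torchvision": (0, 15, 2), "transformers": (4, 36, 0)}

def parse_version(spec):
    """Turn a dotted version string like '4.30.0' into a comparable tuple."""
    return tuple(int(part) for part in spec.split("."))

def audit(requirements):
    """Return names of packages pinned below their first patched version."""
    flagged = []
    for line in requirements:
        name, _, version = line.partition("==")
        patched = ADVISORIES.get(name)
        if patched and parse_version(version) < patched:
            flagged.append(name)
    return flagged

print(audit(["transformers==4.30.0", "torchvision==0.15.2"]))
```

In this toy run only the under-patched pin is flagged; a real pipeline would pull advisories from a live feed and combine the check with the monitoring and threat intelligence the Innovation bullets call for.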