A Framework for Rapidly Developing and Deploying Protection Against Large Language Model Attacks

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face escalating zero-day and novel adversarial attacks, yet existing defenses lack real-time adaptability. Method: This paper introduces the first end-to-end automated defense system for LLM security, pioneering the adaptation of mature malware defense paradigms to the LLM domain. It integrates threat-intelligence-driven detection, multi-source data enrichment, MLOps-based real-time monitoring, and automated model hot-updating. Contribution/Results: The system achieves minute-scale attack response and seamless, production-grade defense upgrades without service interruption, enabling multi-layered dynamic protection and continuous model optimization. Deployed in real-world production environments, it significantly accelerates detection and response to emerging threats. By unifying operational security practices with LLM-specific adaptation mechanisms, it delivers a scalable, self-adaptive, and engineering-ready solution for LLM security.

📝 Abstract
The widespread adoption of Large Language Models (LLMs) has revolutionized AI deployment, enabling autonomous and semi-autonomous applications across industries through intuitive language interfaces and continuous improvements in model development. However, the attendant increase in autonomy and expansion of access permissions among AI applications also make these systems compelling targets for malicious attacks. Their inherent susceptibility to security flaws necessitates robust defenses, yet no known approaches can prevent zero-day or novel attacks against LLMs. This places AI protection systems in a category similar to established malware protection systems: rather than providing guaranteed immunity, they minimize risk through enhanced observability, multi-layered defense, and rapid threat response, supported by a threat intelligence function designed specifically for AI-related threats. Prior work on LLM protection has largely evaluated individual detection models rather than end-to-end systems designed for continuous, rapid adaptation to a changing threat landscape. We present a production-grade defense system rooted in established malware detection and threat intelligence practices. Our platform integrates three components: a threat intelligence system that turns emerging threats into protections; a data platform that aggregates and enriches information while providing observability, monitoring, and ML operations; and a release platform enabling safe, rapid detection updates without disrupting customer workflows. Together, these components deliver layered protection against evolving LLM threats while generating training data for continuous model improvement and deploying updates without interrupting production.
Problem

Research questions and friction points this paper is trying to address.

Preventing zero-day and novel attacks targeting large language models
Lacking end-to-end systems for rapid adaptation to evolving threats
Balancing robust AI protection with continuous deployment requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Threat intelligence system converts emerging threats into protections
Data platform aggregates information for observability and ML operations
Release platform enables rapid detection updates without workflow disruption
Adam Swanda, Cisco
Amy Chang, Cisco
Alexander Chen, Cisco
Fraser Burch, Cisco
Paul Kassianik, AI Security Researcher (AI security)
Konstantin Berlin, Cisco (Machine Learning, Cybersecurity, Bioinformatics)