When AIOps Become "AI Oops": Subverting LLM-driven IT Operations via Telemetry Manipulation

📅 2025-08-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel threat to LLM-based intelligent operations (AIOps) agents: adversaries can launch adversarial deception attacks by manipulating telemetry data to induce destructive infrastructure actions. Method: We propose AIOpsDoom, the first automated attack framework that integrates telemetry reconnaissance, LLM-driven adversarial input generation, and structured fuzz testing—enabling successful hijacking of mainstream AIOps agents without prior knowledge. To counter this threat, we design AIOpsShield, a lightweight defense mechanism leveraging structured data sanitization for precise anomaly filtering. Contribution/Results: Experiments across multiple real-world AIOps scenarios show AIOpsDoom achieves >92% attack success rate, while AIOpsShield blocks 100% of such attacks with negligible observability overhead. This work establishes a new paradigm for security evaluation and protection of LLM-augmented AIOps systems.

Technology Category

Application Category

📝 Abstract
AI for IT Operations (AIOps) is transforming how organizations manage complex software systems by automating anomaly detection, incident diagnosis, and remediation. Modern AIOps solutions increasingly rely on autonomous LLM-based agents to interpret telemetry data and take corrective actions with minimal human intervention, promising faster response times and operational cost savings. In this work, we perform the first security analysis of AIOps solutions, showing that, once again, AI-driven automation comes with a profound security cost. We demonstrate that adversaries can manipulate system telemetry to mislead AIOps agents into taking actions that compromise the integrity of the infrastructure they manage. We introduce techniques to reliably inject telemetry data using error-inducing requests that influence agent behavior through a form of adversarial reward-hacking; plausible but incorrect system error interpretations that steer the agent's decision-making. Our attack methodology, AIOpsDoom, is fully automated--combining reconnaissance, fuzzing, and LLM-driven adversarial input generation--and operates without any prior knowledge of the target system. To counter this threat, we propose AIOpsShield, a defense mechanism that sanitizes telemetry data by exploiting its structured nature and the minimal role of user-generated content. Our experiments show that AIOpsShield reliably blocks telemetry-based attacks without affecting normal agent performance. Ultimately, this work exposes AIOps as an emerging attack vector for system compromise and underscores the urgent need for security-aware AIOps design.
Problem

Research questions and friction points this paper is trying to address.

Analyzing security risks in AI-driven IT operations systems
Demonstrating telemetry manipulation to mislead AIOps agents
Proposing defense mechanisms against AIOps telemetry attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated telemetry data injection for adversarial influence
AIOpsShield sanitizes structured telemetry data
LLM-driven adversarial input generation without system knowledge
🔎 Similar Papers
No similar papers found.