ObliInjection: Order-Oblivious Prompt Injection Attack to LLM Agents with Multi-source Data

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing prompt injection attacks against LLM agents assume either that the attacker controls the entire input or that the ordering of input segments is known, assumptions that break down in multi-source input scenarios. This paper proposes the first order-oblivious attack. Its core innovations are an order-oblivious loss function that models the attack objective jointly across all possible segment permutations, and orderGCG, a gradient-based optimization algorithm tailored to minimize that loss when generating the contaminated segment. Unlike prior approaches, the method requires no prior knowledge of segment order and triggers the attacker-chosen behavior by poisoning only a single segment, even within large inputs comprising 6-100 segments. Evaluations across three cross-domain datasets and twelve state-of-the-art LLMs demonstrate significantly higher attack success rates than existing methods. To the authors' knowledge, this is the first work to achieve both high robustness and low intrusiveness for prompt injection under multi-source, heterogeneous input settings.

📝 Abstract
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended task. In many applications and agents, the input data originates from multiple sources, with each source contributing a segment of the overall input. In these multi-source scenarios, an attacker may control only a subset of the sources and contaminate the corresponding segments, but typically does not know the order in which the segments are arranged within the input. Existing prompt injection attacks either assume that the entire input data comes from a single source under the attacker's control or ignore the uncertainty in the ordering of segments from different sources. As a result, their success is limited in domains involving multi-source data. In this work, we propose ObliInjection, the first prompt injection attack targeting LLM applications and agents with multi-source input data. ObliInjection introduces two key technical innovations: the order-oblivious loss, which quantifies the likelihood that the LLM will complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered; and the orderGCG algorithm, which is tailored to minimize the order-oblivious loss and optimize the contaminated segments. Comprehensive experiments across three datasets spanning diverse application domains and twelve LLMs demonstrate that ObliInjection is highly effective, even when only one out of 6-100 segments in the input data is contaminated.
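The order-oblivious loss described above can be illustrated with a minimal sketch. The function name, the sampling strategy, and the `per_order_loss` callback are all hypothetical illustrations, not the paper's actual implementation; the idea shown is simply averaging the attack loss over sampled orderings of the clean segments plus the single contaminated segment, so that optimizing the contaminated segment does not depend on any one arrangement.

```python
import random


def order_oblivious_loss(clean_segments, contaminated_segment,
                         per_order_loss, n_samples=8, rng=None):
    """Hypothetical sketch of an order-oblivious loss.

    Averages an attack loss (e.g., -log p(target | prompt built from the
    ordered segments)) over randomly sampled orderings of all segments.
    Enumerating every permutation is intractable for many segments, so
    this sketch samples a fixed number of orderings instead.
    """
    rng = rng or random.Random(0)
    all_segments = clean_segments + [contaminated_segment]
    total = 0.0
    for _ in range(n_samples):
        # random.sample with the full length yields a random permutation
        order = rng.sample(all_segments, len(all_segments))
        total += per_order_loss(order)
    return total / n_samples
```

In a real attack, `per_order_loss` would concatenate the ordered segments into the LLM's prompt and return the negative log-likelihood of the attacker-chosen response; here it is left abstract.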
Problem

Research questions and friction points this paper is trying to address.

Prompt injection against LLM agents whose input is assembled from multiple data sources
Uncertainty about the order in which segments from different sources appear in the input
Making a contaminated segment effective no matter how the segments are arranged
Innovation

Methods, ideas, or system contributions that make the work stand out.

Order-oblivious loss quantifies attack success across segment arrangements.
The orderGCG algorithm optimizes the contaminated segments to minimize this loss.
Technique effective with single contaminated segment among many sources.
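The optimization loop can be sketched with a toy stand-in. The paper's orderGCG is a gradient-guided discrete search in the GCG family; the simplified version below (all names hypothetical) drops the gradient guidance and just proposes random single-token substitutions in the contaminated segment, keeping any change that lowers the supplied loss. It conveys the coordinate-wise greedy structure, not the actual algorithm.

```python
import random


def order_gcg_toy(adv_tokens, vocab, loss_fn, steps=200, rng=None):
    """Toy greedy coordinate search (a simplified stand-in for orderGCG).

    At each step, replace one token of the contaminated segment with a
    random vocabulary token and keep the candidate only if it lowers
    loss_fn, which would be the order-oblivious loss in the real attack.
    """
    rng = rng or random.Random(0)
    best = list(adv_tokens)
    best_loss = loss_fn(best)
    for _ in range(steps):
        pos = rng.randrange(len(best))      # pick a coordinate (token slot)
        cand = list(best)
        cand[pos] = rng.choice(vocab)       # propose a substitution
        cand_loss = loss_fn(cand)
        if cand_loss < best_loss:           # greedy accept
            best, best_loss = cand, cand_loss
    return best, best_loss
```

The real algorithm would rank candidate substitutions by token-embedding gradients of the order-oblivious loss rather than sampling them uniformly, which is what makes GCG-style attacks efficient over large vocabularies.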