Prompt-in-Content Attacks: Exploiting Uploaded Inputs to Hijack LLM Behavior

📅 2025-08-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper introduces and systematically investigates “prompt-in-content attacks”: adversarial instructions embedded within user-uploaded documents or pasted text, exploiting insufficient input isolation and prompt concatenation mechanisms in large language models (LLMs) to induce biased, hallucinated, or misleading outputs in tasks such as summarization and question answering. We construct diverse adversarial examples and conduct empirical evaluations across multiple real-world scenarios on mainstream LLM platforms—including ChatGPT, Claude, and Llama series—complemented by processing-flow provenance analysis. Our results demonstrate the high feasibility and broad impact of such attacks in practical deployments. The study identifies a critical security blind spot in current LLM architectures and preliminarily explores detection and mitigation strategies. By uncovering fundamental vulnerabilities in prompt handling, this work establishes both theoretical foundations and empirical evidence for developing robust, secure prompt engineering paradigms.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) are widely deployed in applications that accept user-submitted content, such as uploaded documents or pasted text, for tasks like summarization and question answering. In this paper, we identify a new class of attacks, prompt in content injection, where adversarial instructions are embedded in seemingly benign inputs. When processed by the LLM, these hidden prompts can manipulate outputs without user awareness or system compromise, leading to biased summaries, fabricated claims, or misleading suggestions. We demonstrate the feasibility of such attacks across popular platforms, analyze their root causes including prompt concatenation and insufficient input isolation, and discuss mitigation strategies. Our findings reveal a subtle yet practical threat in real-world LLM workflows.
Problem

Research questions and friction points this paper is trying to address.

Hijacking LLM behavior through uploaded adversarial inputs
Exploiting prompt injection in benign user-submitted content
Manipulating LLM outputs without system compromise awareness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embed adversarial instructions in benign inputs
Manipulate LLM outputs without system compromise
Analyze root causes like prompt concatenation issues
🔎 Similar Papers
No similar papers found.