Concealment of Intent: A Game-Theoretic Analysis

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces a novel “intent-hiding adversarial prompting” attack against large language models (LLMs), in which malicious intent is concealed by composing general-purpose skills so as to evade existing alignment mechanisms. We propose the first game-theoretic framework for modeling and analyzing this attack-defense interaction. Methodologically, we formulate a multi-stage defense game with prompt- and response-level filtering, formally characterizing the attacker’s structural advantages; we theoretically identify skill composition as the root cause of the attack’s scalability and stealth, and design corresponding defensive mechanisms grounded in this analysis. Experiments across multiple mainstream LLMs demonstrate that our attack significantly outperforms prior adversarial prompting methods, while our defense effectively mitigates intent-hiding threats and improves model robustness. The core contribution lies in establishing an interpretable, game-theoretic model of intent-hiding attacks and unifying attack analysis with principled defense design.
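
To make the game formulation concrete, here is a minimal, self-contained sketch (not taken from the paper; the strategy names, detection rates, and payoffs are illustrative assumptions) of a two-player interaction in which the attacker chooses between a direct request and an intent-hiding composition, while the defender deploys prompt-level filtering, response-level filtering, or both stages:

```python
# Illustrative toy model of a two-stage filtering game.
# All probabilities and payoffs below are assumptions for exposition,
# not values reported in the paper.

from itertools import product

# Attacker strategies: send the malicious request directly, or hide intent
# by composing it from benign-looking sub-skills.
ATTACKER = ["direct", "intent_hiding"]
# Defender strategies: filter only prompts, only responses, or both stages.
DEFENDER = ["prompt_filter", "response_filter", "both"]

# Assumed probability that each filter stage catches each attack type.
CATCH = {
    ("direct", "prompt_filter"): 0.90,
    ("direct", "response_filter"): 0.70,
    ("intent_hiding", "prompt_filter"): 0.20,  # composition evades prompt-level checks
    ("intent_hiding", "response_filter"): 0.40,
}

def p_blocked(attack: str, defense: str) -> float:
    """Probability the attack is blocked by the chosen defense pipeline."""
    if defense == "both":
        p_prompt = CATCH[(attack, "prompt_filter")]
        p_resp = CATCH[(attack, "response_filter")]
        return 1.0 - (1.0 - p_prompt) * (1.0 - p_resp)  # independent stages
    return CATCH[(attack, defense)]

def attacker_payoff(attack: str, defense: str) -> float:
    """Zero-sum payoff to the attacker: chance the harmful output gets through."""
    return 1.0 - p_blocked(attack, defense)

for a, d in product(ATTACKER, DEFENDER):
    print(f"{a:>13} vs {d:<15} -> attacker payoff {attacker_payoff(a, d):.2f}")

# Attacker best response to each defense configuration.
best = {d: max(ATTACKER, key=lambda a: attacker_payoff(a, d)) for d in DEFENDER}
print("Attacker best response per defense:", best)
```

With these assumed numbers, hiding intent is the attacker's best response against every defense configuration, which illustrates the kind of structural advantage the summary refers to; the paper's actual model and parameters will differ.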

📝 Abstract
As large language models (LLMs) grow more capable, concerns about their safe deployment have also grown. Although alignment mechanisms have been introduced to deter misuse, they remain vulnerable to carefully designed adversarial prompts. In this work, we present a scalable attack strategy: intent-hiding adversarial prompting, which conceals malicious intent through the composition of skills. We develop a game-theoretic framework to model the interaction between such attacks and defense systems that apply both prompt and response filtering. Our analysis identifies equilibrium points and reveals structural advantages for the attacker. To counter these threats, we propose and analyze a defense mechanism tailored to intent-hiding attacks. Empirically, we validate the attack's effectiveness on multiple real-world LLMs across a range of malicious behaviors, demonstrating clear advantages over existing adversarial prompting techniques.
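
As a rough illustration of why a defense tailored to intent hiding helps, the sketch below (a purely hypothetical scoring function and thresholds, not the paper's mechanism) contrasts a prompt-level filter that screens each composed sub-task in isolation with one that scores the full composition, so that intent visible only in the combination of skills can be flagged:

```python
# Minimal sketch under stated assumptions: scoring the *composition* of
# sub-tasks jointly rather than each benign-looking step in isolation.
# The risk scores below are invented placeholders for a moderation model.

from typing import Callable, List

def per_step_filter(steps: List[str], score: Callable[[str], float],
                    threshold: float = 0.5) -> bool:
    """Baseline filter: inspect each sub-task independently."""
    return any(score(s) >= threshold for s in steps)

def composition_filter(steps: List[str], score: Callable[[str], float],
                       threshold: float = 0.5) -> bool:
    """Tailored defense: score the concatenated chain, so intent that
    emerges only from the combination of skills can be flagged."""
    return score(" -> ".join(steps)) >= threshold

# Toy risk scorer standing in for a moderation model; values are illustrative.
RISK = {
    "summarize this chemistry paper": 0.05,
    "list common household solvents": 0.10,
    "rewrite the steps as numbered instructions": 0.05,
    # Only the combined request reveals the underlying intent.
    "summarize this chemistry paper -> list common household solvents -> "
    "rewrite the steps as numbered instructions": 0.85,
}

def toy_score(text: str) -> float:
    return RISK.get(text, 0.0)

chain = ["summarize this chemistry paper",
         "list common household solvents",
         "rewrite the steps as numbered instructions"]

print("per-step filter flags chain:   ", per_step_filter(chain, toy_score))     # False
print("composition filter flags chain:", composition_filter(chain, toy_score))  # True
```

The per-step filter passes every benign-looking sub-task, while the composition-aware filter flags the chain as a whole; this is only a conceptual illustration of the defense idea described in the abstract, not its actual design.
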
Problem

Research questions and friction points this paper is trying to address.

Analyzing adversarial intent concealment in large language models
Modeling game-theoretic interactions between attacks and defenses
Proposing defense mechanisms against intent-hiding adversarial prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intent-hiding adversarial prompting for attacks
Game-theoretic framework for attack-defense analysis
Defense mechanism tailored to intent-hiding attacks
🔎 Similar Papers
No similar papers found.