Can adversarial attacks by large language models be attributed?

📅 2024-11-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the attribution problem for large language model (LLM) outputs under adversarial settings—i.e., whether a given text can be uniquely traced to a specific LLM. Method: Motivated by cybersecurity threats and disinformation, we formally model LLM outputs as formal languages and apply Gold–Angluin learnability theory to rigorously analyze identifiability. Contribution/Results: Under realistic assumptions, we prove that LLM outputs are fundamentally non-identifiable—i.e., deterministic, unique attribution is theoretically impossible. This impossibility stems from intrinsic limitations in formal language class learnability and the expressive boundaries of Transformer architectures, and holds irrespective of access level (e.g., white-box model access) or observational capability (e.g., complete output logging). Our result establishes, for the first time, a foundational theoretical barrier to LLM attribution grounded in formal language theory—providing a rigorous basis for AI security guarantees and accountability frameworks.

📝 Abstract
Attributing outputs from Large Language Models (LLMs) in adversarial settings, such as cyberattacks and disinformation, presents significant challenges that are likely to grow in importance. We investigate this attribution problem using formal language theory, specifically language identification in the limit as introduced by Gold and extended by Angluin. By modeling LLM outputs as formal languages, we analyze whether finite text samples can uniquely pinpoint the originating model. Our results show that, due to the non-identifiability of certain language classes, under some mild assumptions about overlapping outputs from fine-tuned models it is theoretically impossible to attribute outputs to specific LLMs with certainty. This also holds when accounting for expressivity limitations of Transformer architectures. Even with direct model access or comprehensive monitoring, significant computational hurdles impede attribution efforts. These findings highlight an urgent need for proactive measures to mitigate risks posed by adversarial LLM use as their influence continues to expand.
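The core obstruction can be illustrated with a toy version of the overlapping-languages argument. In this minimal sketch (not from the paper), two hypothetical models have output languages where one is a strict subset of the other; any finite sample from the smaller model is also consistent with the larger one, so no finite text uniquely attributes authorship. All model names and acceptance predicates here are illustrative assumptions.

```python
# Illustrative sketch: why finite samples cannot uniquely attribute outputs
# when candidate models' output languages overlap. Hypothetical model A
# emits only strings of 'a's; hypothetical model B emits strings over
# {'a', 'b'}, so L(A) is a strict subset of L(B).

def model_a_accepts(s: str) -> bool:
    # Model A's output language: nonempty strings of 'a's
    return len(s) > 0 and set(s) == {"a"}

def model_b_accepts(s: str) -> bool:
    # Model B's output language: nonempty strings over {'a', 'b'}
    return len(s) > 0 and set(s) <= {"a", "b"}

def consistent_models(samples):
    """Return all candidate models whose language contains every sample."""
    candidates = {"A": model_a_accepts, "B": model_b_accepts}
    return sorted(name for name, accepts in candidates.items()
                  if all(accepts(s) for s in samples))

# Every finite sample drawn from model A is also consistent with model B,
# so the evidence can never rule B out and single out A.
print(consistent_models(["aa", "aaaa"]))  # both A and B remain plausible
print(consistent_models(["ab"]))          # only B remains plausible
```

This mirrors, in miniature, the paper's setting: when fine-tuned models' output languages overlap, observing more text from the subsumed model never eliminates the subsuming one, which is the kind of non-identifiability Gold's framework formalizes.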
Problem

Research questions and friction points this paper is trying to address.

Attributing adversarial outputs from LLMs is challenging
Determining whether finite text samples uniquely identify an LLM is complex
Exponential growth of model origins makes attribution impractical
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using formal language theory for LLM attribution
Analyzing finite text samples for model identification
Quantifying combinatorial growth of plausible model origins