On Surjectivity of Neural Networks: Can you elicit any behavior from your model?

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the surjectivity of generative neural networks: whether, for any target output, there exists an input that exactly produces it. Surjectivity carries direct security implications, since harmful content can in principle be elicited from a surjective model by constructing the right input. The authors establish formal, architecture-agnostic criteria for surjectivity and prove that key building blocks of modern networks, notably pre-layer normalization combined with linear attention modules, are almost always surjective under generic conditions. As corollaries, standard Transformer-based architectures (e.g., GPT-style models) and diffusion models with deterministic ODE solvers admit inputs that reproduce any specified output. This exposes a fundamental theoretical limitation: mainstream generative models are unavoidably vulnerable to a broad class of adversarial attacks. The work provides a mathematically rigorous, surjectivity-based framework for assessing generative model security.
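In symbols, the central notion and the shape of the pre-layer-norm argument can be sketched as follows (the notation g, LN, and the bounded-perturbation route are illustrative stand-ins, not necessarily the paper's exact proof):

```latex
% Surjectivity: every target output has a preimage.
f\colon \mathbb{R}^n \to \mathbb{R}^n \ \text{is surjective}
  \iff \forall y \in \mathbb{R}^n \ \exists x \in \mathbb{R}^n \colon f(x) = y.

% Pre-layer-norm residual block: \mathrm{LN} has bounded range, so for
% continuous g the residual branch is bounded by some constant C, making
% f a bounded perturbation of the identity, which is always onto.
f(x) = x + g(\mathrm{LN}(x)), \qquad \|f(x) - x\| \le C \ \ \forall x.
```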

📝 Abstract
Given a trained neural network, can any specified output be generated by some input? Equivalently, does the network correspond to a function that is surjective? In generative models, surjectivity implies that any output, including harmful or undesirable content, can in principle be generated by the networks, raising concerns about model safety and jailbreak vulnerabilities. In this paper, we prove that many fundamental building blocks of modern neural architectures, such as networks with pre-layer normalization and linear-attention modules, are almost always surjective. As corollaries, widely used generative frameworks, including GPT-style transformers and diffusion models with deterministic ODE solvers, admit inverse mappings for arbitrary outputs. By studying surjectivity of these modern and commonly used neural architectures, we contribute a formalism that sheds light on their unavoidable vulnerability to a broad class of adversarial attacks.
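The surjectivity claim can be illustrated numerically on a toy pre-layer-norm residual block (a minimal sketch, not the paper's construction; the block shape, weight scale, and fixed-point solver below are illustrative assumptions). Because layer normalization maps inputs to a bounded set, the residual branch is bounded, and for small weights it is also a contraction, so a preimage of any target output can be found by simple fixed-point iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Toy pre-LN residual block: f(x) = x + W2 @ tanh(W1 @ layernorm(x)).
# The 0.05 weight scale keeps the residual branch a contraction, so the
# fixed-point iteration below converges (an illustrative choice).
W1 = 0.05 * rng.standard_normal((d, d))
W2 = 0.05 * rng.standard_normal((d, d))

def layernorm(x, eps=1e-6):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def branch(x):
    # Bounded: layernorm output has norm ~ sqrt(d), and tanh is bounded.
    return W2 @ np.tanh(W1 @ layernorm(x))

def f(x):
    return x + branch(x)

# Pick an arbitrary target output y and solve f(x) = y for x via the
# fixed-point iteration x <- y - branch(x).
y = rng.standard_normal(d)
x = y.copy()
for _ in range(200):
    x = y - branch(x)

print(np.max(np.abs(f(x) - y)))  # residual is ~0: a preimage of y exists
```

For deeper or attention-bearing blocks the same idea requires a more careful solver (e.g., Newton-type methods), which is where the paper's general criteria come in.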
Problem

Research questions and friction points this paper is trying to address.

Does a trained network realize a surjective function, i.e., can every specified output be produced by some input?
Are modern architectural building blocks, such as transformers with pre-layer normalization, almost always surjective?
What do surjectivity results imply about unavoidable adversarial vulnerabilities of generative models?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proof that networks with pre-layer normalization and linear-attention modules are almost always surjective
Corollaries showing GPT-style transformers and diffusion models with deterministic ODE solvers admit preimages for arbitrary outputs
A formalism tying surjectivity to unavoidable vulnerability to a broad class of adversarial attacks