🤖 AI Summary
This work investigates the surjectivity of generative neural networks: whether, for every target output, there exists an input that exactly produces it. Surjectivity carries a direct security implication: if the model is surjective, then any output, including harmful content, can in principle be produced by some adversarially constructed input. Methodologically, the authors combine functional analysis and dynamical systems theory to establish formal, architecture-agnostic criteria for surjectivity. They prove, for the first time, that pre-layer normalization combined with linear attention modules is almost surely surjective under generic conditions. Consequently, standard Transformer-based architectures (e.g., GPT) and deterministic ODE-based diffusion models are globally surjective, i.e., they admit exact inverse mappings for all outputs. This reveals a fundamental theoretical limitation: mainstream generative models inherently lack immunity to certain classes of adversarial attacks. The work provides the first mathematically rigorous, surjectivity-based framework for assessing the security of generative models.
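The central property can be stated compactly. Writing a trained generative network as a map $F$ from inputs (e.g., prompts or latents) to outputs, surjectivity says every target output has at least one preimage:

```latex
F : \mathcal{X} \to \mathcal{Y}, \qquad
F \text{ is surjective} \iff \forall\, y \in \mathcal{Y}\ \exists\, x \in \mathcal{X} : F(x) = y.
```

Note that surjectivity asserts only the *existence* of such an $x$, which is exactly what matters for the security claim: a harmful $y$ cannot be ruled out of the model's range.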
📝 Abstract
Given a trained neural network, can any specified output be generated by some input? Equivalently, is the function the network computes surjective? In generative models, surjectivity implies that any output, including harmful or undesirable content, can in principle be generated by the network, raising concerns about model safety and jailbreak vulnerabilities. In this paper, we prove that many fundamental building blocks of modern neural architectures, such as networks with pre-layer normalization and linear-attention modules, are almost always surjective. As corollaries, widely used generative frameworks, including GPT-style transformers and diffusion models with deterministic ODE solvers, admit inverse mappings for arbitrary outputs. By studying the surjectivity of these modern and commonly used neural architectures, we contribute a formalism that sheds light on their unavoidable vulnerability to a broad class of adversarial attacks.
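The claim about deterministic ODE solvers can be illustrated numerically. A deterministic sampler integrates a velocity field $\dot{x} = f(x, t)$, and a smooth ODE flow is invertible: integrating the same field backwards in time recovers an input for any chosen output. The sketch below is a toy illustration, not the paper's construction; the field `f` is a hypothetical stand-in for a probability-flow ODE, and plain Euler integration is used for simplicity.

```python
import numpy as np

def f(x, t):
    # Hypothetical smooth velocity field standing in for a probability-flow ODE.
    return -x + np.sin(t)

def integrate(x0, t0, t1, steps=10000):
    # Fixed-step Euler integration from t0 to t1; t1 < t0 runs the flow backwards.
    h = (t1 - t0) / steps
    x, t = x0, t0
    for _ in range(steps):
        x = x + h * f(x, t)
        t += h
    return x

target = np.array([0.7, -1.3])        # an arbitrary desired output at t = 1
latent = integrate(target, 1.0, 0.0)  # run the ODE backwards: find a preimage
recon = integrate(latent, 0.0, 1.0)   # run it forwards again from that preimage
err = np.max(np.abs(recon - target))  # small discretization error; exact in the continuous limit
print(err)
```

Running the flow backwards succeeds for *every* target, which is the surjectivity at issue: nothing in the dynamics excludes any output, so an adversary who can choose the latent can, in principle, reach any point in the output space.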