Complexity of Deciding Injectivity and Surjectivity of ReLU Neural Networks

📅 2024-05-30
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper investigates the computational complexity of deciding injectivity and surjectivity for ReLU neural networks, properties that are foundational to formal verification. We prove that injectivity checking is coNP-complete and devise a fixed-parameter tractable (FPT) algorithm parameterized by input dimension. For surjectivity, we establish its equivalence to the zonotope containment problem—thereby proving NP-hardness and strengthening known complexity lower bounds for neural network verification. Furthermore, we provide a complete characterization of surjectivity for two-layer single-output ReLU networks, rigorously reducing the decision problem to classical convex geometric problems. Collectively, these results deliver tight theoretical complexity bounds and novel algorithmic pathways for formal verification of neural networks in safety-critical applications.

📝 Abstract
Neural networks with ReLU activation play a key role in modern machine learning. In view of safety-critical applications, the verification of trained networks is of great importance and necessitates a thorough understanding of essential properties of the function computed by a ReLU network, including characteristics like injectivity and surjectivity. Recently, Puthawala et al. [JMLR 2022] came up with a characterization for injectivity of a ReLU layer, which implies an exponential time algorithm. However, the exact computational complexity of deciding injectivity remained open. We answer this question by proving coNP-completeness of deciding injectivity of a ReLU layer. On the positive side, as our main result, we present a parameterized algorithm which yields fixed-parameter tractability of the problem with respect to the input dimension. In addition, we also characterize surjectivity for two-layer ReLU networks with one-dimensional output. Remarkably, the decision problem turns out to be the complement of a basic network verification task. We prove NP-hardness for surjectivity, implying a stronger hardness result than previously known for the network verification problem. Finally, we reveal interesting connections to computational convexity by formulating the surjectivity problem as a zonotope containment problem.
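For readers unfamiliar with the geometric object the abstract refers to, the standard definition of a zonotope and of the containment question can be stated as follows (this is the textbook formulation, not a claim about the paper's exact reduction):

```latex
% A zonotope with center $c \in \mathbb{R}^d$ and generators
% $g_1, \dots, g_k \in \mathbb{R}^d$ is the centrally symmetric polytope
\[
  Z(c; g_1, \dots, g_k) \;=\;
  \Bigl\{\, c + \sum_{i=1}^{k} \lambda_i \, g_i \;:\;
  \lambda_i \in [-1, 1] \ \text{for all } i \,\Bigr\}.
\]
% The zonotope containment problem asks, given two zonotopes
% $Z_1$ and $Z_2$ by their centers and generators, whether
\[
  Z_1 \subseteq Z_2 .
\]
```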
Problem

Research questions and friction points this paper is trying to address.

Determining computational complexity of ReLU network injectivity
Verifying neural network outputs for safety-critical applications
Analyzing surjectivity via zonotope containment in computational convexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves coNP-completeness for ReLU injectivity
Fixed-parameter tractable algorithm for single-layer injectivity, parameterized by input dimension
Formulates surjectivity as zonotope containment problem
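To make the injectivity question concrete, here is a minimal numerical sketch (not the paper's algorithm, and no substitute for the coNP-complete exact decision): a randomized search for a collision witness of the map x ↦ ReLU(Wx). Finding a pair x ≠ y with identical outputs certifies non-injectivity; failing to find one proves nothing. The function `find_collision` and its parameters are illustrative names introduced here.

```python
import numpy as np

def relu(z):
    # Componentwise ReLU activation.
    return np.maximum(z, 0.0)

def find_collision(W, trials=10000, seed=0):
    """Randomized search for a non-injectivity witness of x -> ReLU(W x).

    Returns a pair (x, y) with x != y and ReLU(W x) == ReLU(W y),
    or None if no collision was found within the trial budget.
    """
    rng = np.random.default_rng(seed)
    n = W.shape[1]
    for _ in range(trials):
        x = rng.normal(size=n)
        y = rng.normal(size=n)
        if not np.allclose(x, y) and np.allclose(relu(W @ x), relu(W @ y)):
            return x, y
    return None  # inconclusive: absence of a witness is not a proof

# A 1x1 "layer" W = [[1]] collapses all negative inputs to 0,
# so a collision witness is found almost immediately.
W_bad = np.array([[1.0]])
print(find_collision(W_bad) is not None)  # → True

# In contrast, x -> ReLU([x, -x]) is injective on R, so the
# search comes up empty.
W_ok = np.array([[1.0], [-1.0]])
print(find_collision(W_ok) is None)  # → True
```

Since such sampling can only falsify injectivity, the paper's contribution is precisely the exact complexity picture: deciding the property is coNP-complete in general, yet fixed-parameter tractable in the input dimension.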
Vincent Froese
Technische Universität Berlin, Faculty IV, Institute of Software Engineering and Theoretical Computer Science, Algorithmics and Computational Complexity.
Moritz Grillo
Technische Universität Berlin, Faculty II, Institute of Mathematics, Combinatorial Optimization and Graph Algorithms.
Martin Skutella
Einstein Professor of Mathematics and Computer Science, TU Berlin
efficient algorithms, discrete mathematics, combinatorial optimization, theoretical computer science