On Space Folds of ReLU Neural Networks

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the "space folding" phenomenon in ReLU neural networks—specifically, how straight-line paths in the Euclidean input space map into the Hamming activation space and why their convexity is lost. Method: The authors introduce the first range-based metric framework for quantifying folding and prove the equivalence of convexity notions between the input space and the Hamming activation space. They employ geometric modeling and self-similarity analysis to characterize the folding process. Contribution/Results: The analysis reveals that space folding exhibits fractal structure; experiments on CantorNet and MNIST validate the metric's effectiveness, computational tractability, and self-similarity. Collectively, this work establishes a novel paradigm for understanding the information-geometric nature of ReLU networks and provides an interpretable, quantitative tool for analyzing activation-space transformations.

📝 Abstract
Recent findings suggest that the consecutive layers of ReLU neural networks can be understood geometrically as space folding transformations of the input space, revealing patterns of self-similarity. In this paper, we present the first quantitative analysis of this space folding phenomenon in ReLU neural networks. Our approach examines how straight paths in the Euclidean input space are mapped to their counterparts in the Hamming activation space. In this process, the convexity of straight lines is generally lost, giving rise to non-convex folding behavior. To quantify this effect, we introduce a novel measure based on range metrics, similar to those used in the study of random walks, and prove the equivalence of convexity notions between the input and activation spaces. Furthermore, we provide empirical analyses on a geometrical analysis benchmark (CantorNet) as well as an image classification benchmark (MNIST). Our work advances the understanding of the activation space in ReLU neural networks by leveraging the phenomenon of geometric folding, providing valuable insights into how these models process input information.
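The core idea—walking along a straight segment in input space and watching the binary ReLU activation pattern (the Hamming-space image) fold back on itself—can be sketched in a few lines. The snippet below is a hedged toy illustration with a random untrained network and a simplified "backtracking" proxy for non-convexity; it is not the paper's exact range-based measure, and all names (`activation_pattern`, `folding_score`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network with random weights (a stand-in for a trained model).
W1 = rng.normal(size=(16, 2)); b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 16)); b2 = rng.normal(size=16)

def activation_pattern(x):
    """Binary activation pattern of input x, i.e. its image in Hamming space."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return np.concatenate([h1 > 0, h2 > 0]).astype(int)

def folding_score(p, q, n_steps=200):
    """Simplified non-convexity proxy along the straight segment p -> q.

    Tracks the Hamming distance to the starting pattern; if convexity
    were preserved, this distance would grow monotonically along the path.
    The score sums all 'backtracking' (decreases) and normalizes by the
    maximum distance reached. Illustrative only, not the paper's metric.
    """
    ts = np.linspace(0.0, 1.0, n_steps)
    start = activation_pattern(p)
    dists = np.array([np.sum(start != activation_pattern((1 - t) * p + t * q))
                      for t in ts])
    backtrack = np.sum(np.maximum(-np.diff(dists), 0))
    return backtrack / max(dists.max(), 1)

p, q = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
score = folding_score(p, q)
print(f"folding score: {score:.3f}")  # 0.0 would indicate no backtracking
```

A positive score means the path's Hamming image moved back toward its starting pattern at some point—the straight line has been "folded" by the network's activation geometry.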
Problem

Research questions and friction points this paper is trying to address.

Quantify ReLU network space folding
Map Euclidean to Hamming space paths
Analyze non-convex folding behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantitative analysis of space folding
Novel range metrics for folding behavior
Empirical analysis on geometric benchmarks
Michal Lewandowski
Software Competence Center Hagenberg (SCCH)
Hamid Eghbalzadeh
Meta
Artificial Intelligence · Machine Learning · Deep Learning · Reinforcement Learning
Bernhard Heinzl
Software Competence Center Hagenberg (SCCH)
Raphael Pisoni
Software Competence Center Hagenberg (SCCH)
Bernhard A. Moser
Software Competence Center Hagenberg (SCCH), Johannes Kepler University of Linz (JKU)