🤖 AI Summary
This work investigates collision resistance in single-layer neural networks—specifically binary perceptrons—defined as the existence of distinct weight vectors yielding identical classification labels on all data points. Focusing on oscillatory activation functions, we develop a topological framework analyzing the solution-space structure and identify an intrinsic “overlap gap” in the collision manifold: a topologically persistent barrier that impedes algorithmic navigation. Leveraging statistical-physics methods, we characterize phase transitions in the solution space and corroborate our analysis via numerical experiments with approximate message-passing algorithms, demonstrating that existing algorithms fail well below the theoretical solvability threshold. This study establishes, for the first time, the inherent computational hardness of neural networks as collision-resistant functions—distinct from lattice-based cryptographic primitives—and introduces a novel paradigm for neural cryptography grounded in structural and topological properties of neural solution spaces.
📝 Abstract
When neural networks are trained to classify a dataset, one finds a set of weights from which the network produces a label for each data point. We study the algorithmic complexity of finding a collision in a single-layer neural network, where a collision is defined as two distinct sets of weights that assign the same labels to all data. For binary perceptrons with oscillating activation functions, we establish the emergence of an overlap gap property in the space of collisions. This is a topological property believed to be a barrier to the performance of efficient algorithms. The hardness is supported by numerical experiments using approximate message-passing algorithms, which stop working well below the threshold predicted by our analysis. Neural networks provide a new category of candidate collision-resistant functions, which for some parameter settings depart from constructions based on lattices. Beyond its relevance to cryptography, our work uncovers new forms of computational hardness emerging in large neural networks, which may be of independent interest.
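To make the notion of a collision concrete, here is a minimal toy sketch (not the paper's construction; the data distribution, oscillation frequency `scale`, and instance sizes `n`, `m` are illustrative assumptions). It brute-forces a collision for a binary perceptron whose label is the sign of an oscillating activation:

```python
import itertools
import math
import random

# Toy binary perceptron with an oscillating activation:
#   label(w, x) = sign(cos(scale * <w, x>)),  w in {-1, +1}^n.
# A "collision" is a pair of distinct weight vectors assigning
# identical labels to every data point in the sample.

random.seed(0)
n, m = 8, 5                                   # tiny illustrative instance
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
scale = 1.0                                   # hypothetical oscillation frequency

def label(w, x):
    # sign of the oscillating activation applied to the pre-activation <w, x>
    pre = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if math.cos(scale * pre) >= 0 else -1

def labels(w):
    return tuple(label(w, x) for x in X)

def find_collision():
    seen = {}
    for w in itertools.product((-1, 1), repeat=n):
        key = labels(w)
        if key in seen:
            return seen[key], w               # distinct weights, same labels
        seen[key] = w
    return None

pair = find_collision()
# With 2^8 = 256 weight vectors but only 2^5 = 32 possible label patterns,
# the pigeonhole principle guarantees a collision on this toy instance.
```

On this scale a collision always exists by counting alone; the paper's question is whether efficient algorithms can find one in the proportional regime where both the number of weights and the number of data points grow large.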