🤖 AI Summary
Deep neural networks exhibit functional behavior that appears ordered, yet a formal way to quantify this order has been lacking.
Method: We propose a quantitative framework grounded in signed graph theory, built on two notions: "computational frustration" and "near-monotonicity". Pretrained convolutional networks are modeled as signed directed graphs, and their degree of order is systematically assessed via structural balance analysis and an analogy to Ising spin glasses.
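A minimal sketch of this construction, assuming a toy fully connected network and plain sign-thresholding of the weights; the layer shapes and the `signed_adjacency` helper are illustrative, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pretrained weights: two dense layers, 8 -> 16 -> 4 units.
weights = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]

def signed_adjacency(weights):
    """Assemble the {-1, 0, +1} adjacency matrix of the layered signed digraph.

    Unit i of layer k points to unit j of layer k+1 with the sign of the
    connecting weight; zero weights contribute no edge.
    """
    sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    A = np.zeros((offsets[-1], offsets[-1]), dtype=int)
    for k, W in enumerate(weights):
        rows = slice(offsets[k], offsets[k + 1])
        cols = slice(offsets[k + 1], offsets[k + 2])
        A[rows, cols] = np.sign(W).T.astype(int)  # edge sign = sign of weight
    return A

A = signed_adjacency(weights)
print("nodes:", A.shape[0], "signed edges:", np.count_nonzero(A))
```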
Contribution/Results: Experiments across diverse architectures and datasets reveal that mainstream pretrained models consistently exhibit significantly lower frustration than null models (random graphs), together with strong near-monotonicity, indicating functional behavior far more ordered than random expectation. This suggests an implicit regularization mechanism: during training, networks spontaneously evolve toward structurally balanced states. The phenomenon is robust across model families and data domains, offering a statistical-physics-inspired explanation for the generalization and robustness of deep learning.
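As a rough illustration of the comparison, here is a hedged sketch of estimating frustration and setting it against a sign-shuffled null model, reusing the matrix `A` built above. Computing the exact frustration index is NP-hard, so this greedy single-spin-flip descent only yields an upper bound; the restart count and the sign-shuffling null model are assumptions for the example, not necessarily the paper's exact choices.

```python
import numpy as np

def frustration_level(A, n_restarts=20, rng=None):
    """Upper-bound the fraction of frustrated edges by greedy spin descent.

    Edge (i, j) is frustrated under spins s when A[i, j] * s[i] * s[j] < 0;
    minimizing that count over s in {-1, +1}^n is the Ising ground-state
    problem for the signed graph, so this heuristic only gives a bound.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, m = A.shape[0], np.count_nonzero(A)
    best = m
    for _ in range(n_restarts):
        s = rng.choice([-1, 1], size=n)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                h = A[i] @ s + A[:, i] @ s  # signed field from in- and out-edges
                if s[i] * h < 0:            # flipping s[i] satisfies more edges
                    s[i] = -s[i]
                    improved = True
        best = min(best, int(np.count_nonzero(A * np.outer(s, s) < 0)))
    return best / m

def sign_shuffled(A, rng):
    """Null model: identical topology, edge signs randomly permuted."""
    B = A.copy()
    idx = np.nonzero(B)
    B[idx] = rng.permutation(B[idx])
    return B

# Compare the network's frustration against a small null ensemble.
rng = np.random.default_rng(1)
nulls = [frustration_level(sign_shuffled(A, rng), rng=rng) for _ in range(10)]
print("network:", frustration_level(A, rng=rng), "null mean:", float(np.mean(nulls)))
```

With the random toy weights used here, the two numbers should be statistically indistinguishable; the paper's finding is precisely that genuinely pretrained weights fall well below the null ensemble.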
📝 Abstract
For the signed graph associated with a deep neural network, one can compute the frustration level, i.e., measure how close to or far from structural balance the graph is. For all the pretrained deep convolutional neural networks we consider, we find that the frustration is always lower than expected from null models. From a statistical physics point of view, and in particular in reference to an Ising spin glass model, the reduced frustration indicates that the amount of disorder encoded in the network is less than in the null models. From a functional point of view, low frustration (i.e., proximity to structural balance) means that the function representing the network behaves near-monotonically, i.e., more like a monotone function than the null models do. Evidence of near-monotonic behavior along the partial order determined by frustration is observed for all networks we consider. This confirms that the class of deep convolutional neural networks tends to have more ordered behavior than expected from null models, and suggests a novel form of implicit regularization.
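To make the functional claim concrete, one could probe near-monotonicity empirically along the signed partial order: x <=_s y whenever s_i * x_i <= s_i * y_i in every input coordinate, with s taken from a frustration-minimizing spin assignment such as the one found above. The sketch below, with an arbitrary sampling scheme and a scalar-output toy function, is an illustrative assumption rather than the paper's protocol:

```python
import numpy as np

def violation_rate(f, input_spins, dim, n_pairs=2000, rng=None):
    """Fraction of sampled ordered pairs x <=_s y on which f decreases.

    A perfectly monotone f (with respect to the signed order given by
    input_spins) scores 0; near-monotonicity means a small rate.
    """
    if rng is None:
        rng = np.random.default_rng()
    violations = 0
    for _ in range(n_pairs):
        x = rng.standard_normal(dim)
        # y dominates x in the signed order: s_i * (y_i - x_i) >= 0 for all i.
        y = x + input_spins * rng.uniform(0.0, 1.0, size=dim)
        if f(y) < f(x):
            violations += 1
    return violations / n_pairs

# Sanity check: a positive-weight linear map is exactly monotone for s = +1.
w = np.array([0.5, 1.0, 2.0, 0.25])
print(violation_rate(lambda x: float(w @ x), np.ones(4, dtype=int), dim=4))
```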