Task complexity shapes internal representations and robustness in neural networks

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how task complexity shapes the topological structure and robustness of internal representations in multilayer perceptrons (MLPs). Method: We propose a model- and modality-agnostic measure of task complexity, modeling MLPs as signed weighted bipartite graphs. Systematic analysis employs network science probes—including weight pruning, binarization, noise injection, sign flipping, and bipartite graph randomization—to characterize representation geometry. Contribution/Results: We find that task difficulty critically governs the emergence of signed bipartite topology in representations: hard tasks yield models highly sensitive to weight binarization, exhibiting sharp phase-transition behavior; in contrast, simple tasks yield more robust representations—retaining high accuracy even with sign-only structure, and occasionally improving under moderate noise, suggesting stochastic resonance-like effects. Crucially, we establish signed bipartite topology as a fundamental geometric invariant of representation learning and quantify its dependence on task difficulty, thereby linking task hardness, representational structure, and robustness in a unified framework.
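The bipartite graph randomization mentioned above can be illustrated with a minimal sketch: shuffle the weight magnitudes within a layer while keeping every weight's sign in place, so that only the signed bipartite topology survives. This is an assumed reading of the probe, not the authors' code; the function name and rescaling choices are illustrative.

```python
import numpy as np

def shuffle_magnitudes_preserving_signs(W, rng=None):
    """Sign-preserving randomization probe (hypothetical sketch):
    permute the layer's weight magnitudes at random while leaving
    each entry's sign where it is, so only the signed topology of
    the bipartite layer graph is retained."""
    rng = np.random.default_rng() if rng is None else rng
    mags = np.abs(W).ravel()       # collect all magnitudes
    rng.shuffle(mags)              # permute them across the layer
    return np.sign(W) * mags.reshape(W.shape)

# Toy layer: signs are preserved, magnitudes are reshuffled
W = np.array([[1.0, -2.0], [3.0, -4.0]])
Ws = shuffle_magnitudes_preserving_signs(W, np.random.default_rng(0))
```

If accuracy stays high under this shuffle, the sign pattern (the signed bipartite topology) rather than the precise magnitudes is carrying the representation, which is the paper's claim for easy tasks.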

📝 Abstract
Neural networks excel across a wide range of tasks, yet remain black boxes. In particular, how their internal representations are shaped by the complexity of the input data and the problems they solve remains obscure. In this work, we introduce a suite of five data-agnostic probes (pruning, binarization, noise injection, sign flipping, and bipartite network randomization) to quantify how task difficulty influences the topology and robustness of representations in multilayer perceptrons (MLPs). MLPs are represented as signed, weighted bipartite graphs from a network science perspective. We contrast easy and hard classification tasks on the MNIST and Fashion-MNIST datasets. We show that binarizing weights in hard-task models collapses accuracy to chance, whereas easy-task models remain robust. We also find that pruning low-magnitude edges in binarized hard-task models reveals a sharp phase transition in performance. Moreover, moderate noise injection can enhance accuracy, resembling a stochastic-resonance effect linked to optimal sign flips of small-magnitude weights. Finally, preserving only the sign structure, instead of precise weight magnitudes, through bipartite network randomizations suffices to maintain high accuracy. These phenomena define a model- and modality-agnostic measure of task complexity: the performance gap between full-precision and binarized or shuffled neural network performance. Our findings highlight the crucial role of signed bipartite topology in learned representations and suggest practical strategies for model compression and interpretability that align with task complexity.
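The binarization probe from the abstract can be sketched as follows: collapse each weight to its sign, optionally rescaled by the layer's mean absolute weight. This is a minimal assumed implementation; the function name and the per-layer rescaling are illustrative choices, not taken from the paper.

```python
import numpy as np

def binarize_weights(W, per_layer_scale=True):
    """Binarization probe (hypothetical sketch): replace each weight
    with its sign, optionally rescaled by the layer's mean |w| so the
    overall magnitude scale of the layer is preserved."""
    scale = np.abs(W).mean() if per_layer_scale else 1.0
    return np.sign(W) * scale

# Toy layer: after binarization only the sign pattern remains
W = np.array([[0.8, -0.3], [-0.5, 0.9]])
Wb = binarize_weights(W)
```

Applying this probe layer by layer and re-evaluating accuracy is what separates the two regimes described above: easy-task models survive it, hard-task models collapse to chance.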
Problem

Research questions and friction points this paper is trying to address.

How task complexity affects neural network representations and robustness
Measuring task difficulty impact on MLP topology via data-agnostic probes
Exploring signed bipartite topology's role in model compression strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses data-agnostic probes to analyze representations
Employs bipartite graphs for network science perspective
Measures task complexity via performance gaps
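The performance-gap measure in the last bullet reduces to a simple difference of accuracies. A minimal sketch, with illustrative numbers that are not from the paper:

```python
def task_complexity_gap(acc_full, acc_probed):
    """Task-complexity proxy (sketch of the paper's framing): the
    accuracy gap between the full-precision model and its binarized
    or sign-shuffled counterpart. A large gap suggests a hard task;
    a small gap suggests an easy one. Accuracies in [0, 1] assumed."""
    return acc_full - acc_probed

# Illustrative values (hypothetical, not reported results):
easy_gap = task_complexity_gap(0.97, 0.95)  # small gap: easy task
hard_gap = task_complexity_gap(0.95, 0.10)  # large gap: hard task
```

Because the measure only compares a model against a degraded copy of itself, it needs no access to the data distribution, which is what makes it model- and modality-agnostic.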