🤖 AI Summary
Existing approaches struggle to uncover the causal influence of hidden neurons on network outputs, since activation patterns alone are insufficient to reveal internal computational mechanisms. This work proposes CODEC, a method that, unlike prior activation-based analyses, decomposes network behavior into interpretable, sparse contribution modes by combining contribution decomposition with sparse autoencoders. Applied to image classification and to models of retinal neural activity, the framework reveals cross-layer causal computation pathways, enables precise intervention in and visualization of intermediate layers, uncovers a progressive decorrelation of positive and negative contributions in deeper layers, and shows how compositional interactions among intermediate neurons give rise to dynamic receptive fields.
📝 Abstract
Understanding how neural networks transform inputs into outputs is crucial for interpreting and manipulating their behavior. Most existing approaches analyze internal representations by identifying hidden-layer activation patterns correlated with human-interpretable concepts. Here we take a direct approach to examine how hidden neurons act to drive network outputs. We introduce CODEC (Contribution Decomposition), a method that uses sparse autoencoders to decompose network behavior into sparse motifs of hidden-neuron contributions, revealing causal processes that cannot be determined by analyzing activations alone. Applying CODEC to benchmark image-classification networks, we find that contributions grow in sparsity and dimensionality across layers and, unexpectedly, that they progressively decorrelate positive and negative effects on network outputs. We further show that decomposing contributions into sparse modes enables greater control and interpretation of intermediate layers, supporting both causal manipulations of network output and human-interpretable visualizations of distinct image components that combine to drive that output. Finally, by analyzing state-of-the-art models of neural activity in the vertebrate retina, we demonstrate that CODEC uncovers combinatorial actions of model interneurons and identifies the sources of dynamic receptive fields. Overall, CODEC provides a rich and interpretable framework for understanding how nonlinear computations evolve across hierarchical layers, establishing contribution modes as an informative unit of analysis for mechanistic insights into artificial neural networks.
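To make the distinction between activations and contributions concrete, here is a minimal sketch in a toy one-hidden-layer network. The contribution definition used below (hidden activation times its outgoing weight) is the simplest linear case and is only an illustration of the general idea, not necessarily the paper's exact formulation of CODEC; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: x -> h = relu(W1 @ x) -> y = w2 . h
W1 = rng.normal(size=(8, 4))   # input-to-hidden weights
w2 = rng.normal(size=8)        # hidden-to-output weights
x = rng.normal(size=4)         # a single input

h = np.maximum(W1 @ x, 0.0)    # hidden activations
contrib = w2 * h               # per-neuron contribution to the scalar output
y = w2 @ h

# In this linear-readout case, contributions sum exactly to the output.
assert np.isclose(contrib.sum(), y)

# Activations alone cannot distinguish positive from negative influence:
# a large h[i] may push the output up or down depending on sign(w2[i]).
pos_total = contrib[contrib > 0].sum()
neg_total = contrib[contrib < 0].sum()
```

In the full method, such contribution vectors (collected over many inputs) would be the data on which a sparse autoencoder is trained to extract sparse contribution modes; that training step is omitted here.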