🤖 AI Summary
This study addresses the lack of structured theoretical foundations for representation mechanisms in deep neural networks (DNNs), particularly regarding how DNNs learn interpretable and robust linear representations from noisy inputs. To this end, we propose the **Input-Space Linearity Hypothesis (ISLH)** and introduce the **Spectral Principal Path (SPP) framework**, the first to characterize the dynamical process of layer-wise representation distillation and convergence via spectral analysis. By modeling principal paths, analyzing interpretability along linear directions, and conducting empirical validation on multimodal large language models, we demonstrate that deep networks progressively converge—along a small set of dominant spectral directions—to human-interpretable concept subspaces. This mechanism substantially enhances representation transparency, cross-domain robustness, and fairness. Extensive experiments on vision-language models confirm its generalizability and effectiveness.
📝 Abstract
High-level representations have become a central focus in enhancing AI transparency and control, shifting attention from individual neurons or circuits to structured semantic directions that align with human-interpretable concepts. Motivated by the Linear Representation Hypothesis (LRH), we propose the Input-Space Linearity Hypothesis (ISLH), which posits that concept-aligned directions originate in the input space and are selectively amplified with increasing depth. We then introduce the Spectral Principal Path (SPP) framework, which formalizes how deep networks progressively distill linear representations along a small set of dominant spectral directions. Building on this framework, we further demonstrate the multimodal robustness of these representations in Vision-Language Models (VLMs). By bridging theoretical insights with empirical validation, this work advances a structured theory of representation formation in deep networks, paving the way for improving AI robustness, fairness, and transparency.