🤖 AI Summary
Existing neural network probing methods often rely on input perturbations or parameter analysis, which struggle to uncover structured information embedded in intermediate representations. This work proposes APEX, a probing paradigm that perturbs hidden activations during inference while keeping both inputs and model parameters fixed. APEX formalizes activation perturbation as a general probing framework that subsumes prior approaches such as input perturbation as special cases, enabling a controllable transition from sample-dependent to model-dependent behavioral analysis. Experiments demonstrate that APEX effectively quantifies representational structure, distinguishes models trained on structured versus random labels, reveals semantically coherent prediction transitions, and precisely identifies the concentration of predictions on the target class in backdoor attacks.
📝 Abstract
Prior work on probing neural networks primarily relies on input-space analysis or parameter perturbation, both of which face fundamental limitations in accessing structural information encoded in intermediate representations. We introduce Activation Perturbation for EXploration (APEX), an inference-time probing paradigm that perturbs hidden activations while keeping both inputs and model parameters fixed. We theoretically show that activation perturbation induces a principled transition from sample-dependent to model-dependent behavior by suppressing input-specific signals and amplifying representation-level structure, and further establish that input perturbation corresponds to a constrained special case of this framework. Through representative case studies, we demonstrate the practical advantages of APEX. In the small-noise regime, APEX provides a lightweight and efficient measure of sample regularity that aligns with established metrics, while also distinguishing structured from randomly labeled models and revealing semantically coherent prediction transitions. In the large-noise regime, APEX exposes training-induced model-level biases, including a pronounced concentration of predictions on the target class in backdoored models. Overall, our results show that APEX offers an effective perspective for exploring and understanding neural networks beyond what is accessible from input space alone.
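The core mechanic described above can be illustrated in a few lines: inject Gaussian noise into a hidden activation at inference time (inputs and weights fixed) and track how often predictions change as the noise scale grows. The following is a minimal NumPy sketch under assumed details; the toy two-layer network, noise scales, and flip-rate probe are illustrative choices, not the paper's actual models or APEX's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network (random fixed weights), standing in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))

def predict(x, noise_scale=0.0):
    """Forward pass with optional Gaussian perturbation of the hidden activation.

    Inputs and parameters stay fixed; only the intermediate representation
    is perturbed, mirroring the activation-perturbation idea.
    """
    h = np.maximum(x @ W1, 0.0)  # hidden activation (ReLU)
    if noise_scale > 0:
        h = h + rng.normal(scale=noise_scale, size=h.shape)
    return np.argmax(h @ W2, axis=-1)  # predicted class

x = rng.normal(size=(256, 8))   # a fixed batch of probe inputs
clean = predict(x)              # unperturbed predictions

def flip_rate(sigma, trials=20):
    """Average fraction of predictions that change under noise of scale sigma."""
    return float(np.mean([np.mean(predict(x, sigma) != clean)
                          for _ in range(trials)]))

for sigma in (0.1, 1.0, 10.0):
    print(f"sigma={sigma:>4}: mean flip rate = {flip_rate(sigma):.2f}")
```

Small noise leaves most predictions intact (sample-dependent regime), while large noise overwhelms the input signal and the output distribution is governed by the model's own structure (model-dependent regime), which is where training-induced biases such as backdoor target-class concentration would surface.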