🤖 AI Summary
This study investigates the invariance mechanisms underlying feature composition in high-level visual recognition and their implications for adversarial vulnerability. We propose Stretch-and-Squeeze (SnS), a gradient-free, model-agnostic bi-objective optimization framework that jointly characterizes representation invariance and activation suppression. Our contributions include: (i) the first unbiased bi-objective paradigm for invariance analysis; (ii) the first disentanglement of semantic invariances (such as brightness, texture, and pose) across pixel-level, mid-level, and high-level representations; and (iii) the discovery that invariant images synthesized by robust CNNs are substantially more recognizable to humans. Experiments demonstrate that SnS generates invariant images that deviate further from the reference than affine transformations while better preserving unit responses; that mid-level perturbations predominantly alter texture, whereas high-level perturbations govern pose; and that invariant images derived from robust models achieve significantly higher human recognition accuracy than those from standard models.
📝 Abstract
Uncovering which feature combinations high-level visual units encode is critical to understanding how images are transformed into representations that support recognition. While existing feature visualization approaches typically infer a unit's most exciting images, this is insufficient to reveal the manifold of transformations under which responses remain invariant, which is key to generalization in vision. Here we introduce Stretch-and-Squeeze (SnS), an unbiased, model-agnostic, and gradient-free framework to systematically characterize a unit's invariance landscape and its vulnerability to adversarial perturbations in both biological and artificial visual systems. SnS frames these transformations as bi-objective optimization problems. To probe invariance, SnS seeks image perturbations that maximally alter the representation of a reference stimulus in a given processing stage while preserving unit activation. To probe adversarial sensitivity, SnS seeks perturbations that minimally alter the stimulus while suppressing unit activation. Applied to convolutional neural networks (CNNs), SnS revealed image variations that were further from a reference image in pixel space than those produced by affine transformations, while more strongly preserving the target unit's response. The discovered invariant images differed dramatically depending on the choice of image representation used for optimization: pixel-level changes primarily affected luminance and contrast, while stretching mid- and late-layer CNN representations altered texture and pose, respectively. Notably, the invariant images from robust networks were more recognizable by human subjects than those from standard networks, supporting the higher fidelity of robust CNNs as models of the visual system.
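The "stretch" objective described in the abstract can be illustrated with a minimal, hypothetical sketch: here a random linear map stands in for a network's representation stage, a single readout weight vector stands in for the target unit, and a simple accept-if-better hill climb stands in for the gradient-free optimizer. None of these stand-ins are the paper's actual networks or optimization algorithm; the sketch only shows the shape of the bi-objective, scalarized with a penalty weight `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a fixed random "mid-layer" representation (256-dim
# image -> 64-dim features) and one "unit" read out from that representation.
W_rep = rng.normal(size=(64, 256))
w_unit = rng.normal(size=64)

def representation(x):
    return np.tanh(W_rep @ x)

def activation(x):
    return float(w_unit @ representation(x))

# Reference stimulus, its representation, and the target unit's response to it.
x_ref = rng.normal(size=256)
r_ref = representation(x_ref)
a_ref = activation(x_ref)

def stretch_score(x, lam=10.0):
    # Bi-objective, scalarized: reward moving far from the reference in
    # representation space, penalize drift of the target unit's activation.
    rep_dist = np.linalg.norm(representation(x) - r_ref)
    act_drift = abs(activation(x) - a_ref)
    return rep_dist - lam * act_drift

# Gradient-free search: propose random perturbations, keep only improvements.
x = x_ref.copy()
best = stretch_score(x)  # starts at 0 by construction
for _ in range(2000):
    cand = x + 0.05 * rng.normal(size=256)
    s = stretch_score(cand)
    if s > best:
        x, best = cand, s
```

The adversarial "squeeze" probe would flip the two terms: minimize the pixel-space distance from the reference while driving the unit's activation down. In practice both probes are better treated as true bi-objective problems (e.g. tracing a Pareto front over the two terms) rather than a single weighted sum, but the scalarized version keeps the sketch short.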