Probing Human Visual Robustness with Neurally-Guided Deep Neural Networks

📅 2024-05-04
📈 Citations: 3
Influential: 0
🤖 AI Summary
While human vision is remarkably robust to dynamic scenes and image perturbations, deep neural networks (DNNs) remain highly vulnerable. It has been unclear whether the hierarchical evolution of representations along the ventral visual stream (VVS) underpins this biological robustness. Method: The authors construct DNNs aligned to successive VVS regions via fMRI-guided representational alignment and analyze the geometry of category manifolds across hierarchical levels. Contribution/Results: A consistent hierarchical trend emerges in which manifold extent decreases while linear separability increases, and this geometric progression predicts the robustness gains of the aligned DNNs. The results suggest that the hierarchical evolution of the representational space itself, rather than specialization in high-level regions alone, is a core neural mechanism of robustness, and that manifold structure, not just feature complexity, is decisive. Supervision from neural manifolds alone (manifold guidance) suffices to qualitatively reproduce the hierarchical robustness gains, and the neurally aligned DNNs achieve significantly improved adversarial accuracy while reproducing the VVS's manifold trajectory.
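The two manifold properties highlighted above can be illustrated with a minimal sketch, assuming NumPy and scikit-learn. The extent measure (mean distance from the category centroid) and the separability proxy (cross-validated linear-classifier accuracy) are common simplifications for illustration, not necessarily the paper's exact definitions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def manifold_extent(points):
    """Mean distance of exemplar representations from their category centroid."""
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1).mean()

# Toy data: two categories, 50 exemplars each, in a 64-d representation space.
cat_a = rng.normal(loc=0.0, scale=1.0, size=(50, 64))
cat_b = rng.normal(loc=0.5, scale=1.0, size=(50, 64))

extents = [manifold_extent(cat_a), manifold_extent(cat_b)]

# Linear separability proxy: cross-validated accuracy of a linear classifier.
X = np.vstack([cat_a, cat_b])
y = np.array([0] * 50 + [1] * 50)
separability = cross_val_score(LinearSVC(), X, y, cv=5).mean()
```

On the paper's account, representations from higher VVS regions would yield smaller `extents` and higher `separability` than those from early regions.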

📝 Abstract
Humans effortlessly navigate the dynamic visual world, yet deep neural networks (DNNs), despite excelling at many visual tasks, are surprisingly vulnerable to minor image perturbations. Past theories suggest that human visual robustness arises from a representational space that evolves along the ventral visual stream (VVS) of the brain to increasingly tolerate object transformations. To test whether robustness is supported by such progression as opposed to being confined exclusively to specialized higher-order regions, we trained DNNs to align their representations with human neural responses from consecutive VVS regions while performing visual tasks. We demonstrate a hierarchical improvement in DNN robustness: alignment to higher-order VVS regions leads to greater improvement. To investigate the mechanism behind such robustness gains, we test a prominent hypothesis that attributes human robustness to the unique geometry of neural category manifolds in the VVS. We first reveal that more desirable manifold properties, specifically, smaller extent and better linear separability, indeed emerge across the human VVS. These properties can be inherited by neurally aligned DNNs and predict their subsequent robustness gains. Furthermore, we show that supervision from neural manifolds alone, via manifold guidance, is sufficient to qualitatively reproduce the hierarchical robustness improvements. Together, these results highlight the critical role of the evolving representational space across VVS in achieving robust visual inference, in part through the formation of more linearly separable category manifolds, which may in turn be leveraged to develop more robust AI systems.
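The alignment procedure described in the abstract (training DNNs to match human neural responses from consecutive VVS regions) can be sketched as an auxiliary loss that matches the model's stimulus-by-stimulus representational geometry to fMRI data. The RSA-style sketch below uses NumPy; the function names (`rdm`, `alignment_loss`) are illustrative assumptions, and the paper's actual training objective may differ:

```python
import numpy as np

def rdm(acts):
    """Representational dissimilarity matrix: 1 - correlation between all stimulus pairs."""
    acts = acts - acts.mean(axis=1, keepdims=True)
    acts = acts / (np.linalg.norm(acts, axis=1, keepdims=True) + 1e-8)
    return 1.0 - acts @ acts.T

def alignment_loss(model_acts, neural_acts):
    """Penalize mismatch between the model's and the brain's stimulus geometry."""
    return np.mean((rdm(model_acts) - rdm(neural_acts)) ** 2)

# Toy usage: 20 stimuli, 128-d model features, 300 fMRI voxels from one VVS region.
rng = np.random.default_rng(1)
model_acts = rng.normal(size=(20, 128))
neural_acts = rng.normal(size=(20, 300))
loss = alignment_loss(model_acts, neural_acts)
```

In actual training this term would be computed on differentiable activations (e.g., in PyTorch) and added to the task loss, so the network is simultaneously optimized for the visual task and for matching the target VVS region's geometry.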
Problem

Research questions and friction points this paper is trying to address.

Investigates why DNNs, unlike humans, are vulnerable to minor image perturbations
Tests if human visual robustness stems from ventral visual stream progression
Explores neural manifold properties enabling robust visual inference in humans
Innovation

Methods, ideas, or system contributions that make the work stand out.

Align DNNs with human neural responses
Improve robustness via hierarchical VVS alignment
Guide DNNs using neural manifold properties
Zhenan Shao
Department of Psychology, University of Illinois Urbana-Champaign; The Beckman Institute, University of Illinois Urbana-Champaign; Department of Computer Science, University of Illinois Urbana-Champaign
Linjian Ma
Research scientist, Meta Platforms, Inc.
Numerical Algorithms · Tensors · Quantum Simulation · High Performance Computing · Machine Learning
Bo Li
Department of Computer Science, University of Illinois Urbana-Champaign; Department of Computer Science, University of Chicago
Diane M. Beck
Department of Psychology, University of Illinois Urbana-Champaign; The Beckman Institute, University of Illinois Urbana-Champaign