Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient adversarial robustness in histopathological image analysis, this paper proposes Hierarchical Self-supervised Adversarial Training (HSAT), the first framework to incorporate patient–slide–patch hierarchical structural modeling into self-supervised adversarial training. HSAT constructs semantically consistent adversarial examples via multi-level contrastive learning and integrates domain-specific pathological priors to achieve multi-granularity feature alignment. On the OpenSRH dataset, HSAT improves robustness by 54.31% under white-box attacks and incurs only a 3–4% performance drop under black-box attacks—substantially outperforming baseline methods (which suffer 25–30% degradation) and establishing a new state-of-the-art benchmark. Key contributions are: (i) the first exploitation of intrinsic hierarchical structure in histopathology to guide adversarial example generation; and (ii) the deep integration of hierarchical modeling with self-supervised contrastive learning and PGD-based adversarial training, jointly optimizing both discriminability and robustness.
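The summary above describes crafting adversarial examples with PGD against a contrastive objective. A minimal sketch of that core step, assuming PyTorch; the function `pgd_contrastive_attack`, its hyperparameters, and the simple cosine-similarity objective are illustrative assumptions, not HSAT's actual implementation (which uses a hierarchical multi-level loss; see the linked repository):

```python
import torch
import torch.nn.functional as F

def pgd_contrastive_attack(encoder, x, x_pos, eps=8/255, alpha=2/255, steps=5):
    """Illustrative PGD attack on a self-supervised objective: perturb x so its
    embedding disagrees with that of a positive view x_pos (e.g. a patch from
    the same slide/patient). Hypothetical helper, not HSAT's code."""
    # Freeze the positive view's embedding; the attack only moves x.
    with torch.no_grad():
        z_pos = F.normalize(encoder(x_pos), dim=1)
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        z = F.normalize(encoder(x + delta), dim=1)
        # Adversarial objective: maximize negative cosine similarity,
        # i.e. push the embedding away from its positive.
        loss = -(z * z_pos).sum(dim=1).mean()
        grad = torch.autograd.grad(loss, delta)[0]
        # Signed gradient ascent step, projected back into the eps-ball.
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1).detach()
```

In adversarial training, examples produced this way would then be fed back through the contrastive loss and minimized over the encoder's parameters, jointly optimizing discriminability and robustness as the summary describes.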

📝 Abstract
Adversarial attacks pose significant challenges for vision models in critical fields like healthcare, where reliability is essential. Although adversarial training has been well studied for natural images, its application to biomedical and microscopy data remains limited. Existing self-supervised adversarial training methods overlook the hierarchical structure of histopathology images, where patient–slide–patch relationships provide valuable discriminative signals. To address this, we propose Hierarchical Self-Supervised Adversarial Training (HSAT), which exploits these properties to craft adversarial examples using multi-level contrastive learning and integrates them into adversarial training for enhanced robustness. We evaluate HSAT on the multiclass histopathology dataset OpenSRH, and the results show that HSAT outperforms existing methods from both the biomedical and natural image domains. HSAT enhances robustness, achieving an average gain of 54.31% in the white-box setting and reducing performance drops to 3–4% in the black-box setting, compared to 25–30% for the baseline. These results set a new benchmark for adversarial training in this domain, paving the way for more robust models. Our code for training and evaluation is available at https://github.com/HashmatShadab/HSAT.
Problem

Research questions and friction points this paper is trying to address.

Address adversarial attacks in histopathology vision models
Exploit hierarchical structure for robust adversarial training
Improve model robustness in white-box and black-box settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Self-Supervised Adversarial Training (HSAT)
Multi-level contrastive learning for adversarial examples
Enhanced robustness in histopathology vision models
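The multi-level contrastive idea above, treating patches from the same slide or the same patient as additional positives, can be sketched as a weighted InfoNCE loss. A toy illustration assuming PyTorch; the function `hierarchical_nce`, the weights `w_slide`/`w_patient`, and the weighting scheme are hypothetical, not HSAT's exact formulation:

```python
import torch
import torch.nn.functional as F

def hierarchical_nce(z, patient_ids, slide_ids, tau=0.1, w_slide=0.5, w_patient=0.25):
    """Toy multi-level InfoNCE over a batch of patch embeddings z: patches from
    the same slide are strong positives, patches from the same patient but a
    different slide are weaker positives. Illustrative only."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                       # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    same_slide = (slide_ids[:, None] == slide_ids[None, :]) & ~eye
    same_patient = (patient_ids[:, None] == patient_ids[None, :]) & ~eye
    # Weighted positive mask: slide-level pairs outweigh patient-level pairs.
    pos_w = w_slide * same_slide.float() \
          + w_patient * (same_patient & ~same_slide).float()
    # Log-softmax over all non-self pairs, as in standard InfoNCE.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')),
                                     dim=1, keepdim=True)
    # Weighted average log-likelihood of each anchor's positives.
    loss = -(pos_w * log_prob).sum(dim=1) / pos_w.sum(dim=1).clamp_min(1e-8)
    return loss.mean()
```

Minimizing this loss pulls patch embeddings together along the patient–slide–patch hierarchy, while the adversarial examples used during training are crafted to maximize it.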
Authors

Hashmat Shadab Malik
MBZUAI, UAE
Computer Vision · AI Safety & Reliability

Shahina Kunhimon
Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE

Muzammal Naseer
Asst. Professor, Khalifa University
Multi-modal Learning · AI Safety and Reliability

Fahad Shahbaz Khan
MBZUAI; Linköping University, Sweden
Computer Vision · Object Recognition · Generative AI · AI for Science

Salman H. Khan
Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE; Australian National University, Australia