🤖 AI Summary
Deep neural networks suffer from insufficient robustness against both adversarial attacks and natural distribution shifts, hindering their real-world deployment. To address this, we introduce an open-source toolbox unifying support for adversarial robustness (e.g., PGD- and TRADES-based defenses) and non-adversarial robustness (e.g., corruptions and distribution shifts under ImageNet-C/R/A). Our framework integrates benchmarks along both robustness dimensions, standardized evaluation protocols, and reproducible analysis pipelines. Built as a modular, plug-and-play PyTorch library, it supports robust training (e.g., adversarial training, robust pretraining), corruption-aware and style-based data augmentation, and diagnostic visualization. We conduct comprehensive evaluations across standard benchmarks, including ImageNet, and release all code and pretrained models. The toolbox has been adopted by the research community, lowering barriers to robust vision research and practical application.
📝 Abstract
Deep neural networks (DNNs) have shown great promise in computer vision tasks. However, machine vision achieved by DNNs is not yet as robust as human perception. Adversarial attacks and data distribution shifts are known as two major scenarios which degrade machine performance and obstruct the wide deployment of machines "in the wild". In order to remove these obstructions and facilitate research on model robustness, we develop EasyRobust, a comprehensive and easy-to-use toolkit for training, evaluation and analysis of robust vision models. EasyRobust targets two types of robustness: 1) Adversarial robustness enables the model to defend against malicious inputs crafted by worst-case perturbations, also known as adversarial examples; 2) Non-adversarial robustness enhances model performance on natural test images with corruptions or distribution shifts. Thorough benchmarks on image classification enable EasyRobust to provide an accurate robustness evaluation of vision models. We hope EasyRobust can help with training practically robust models and promote academic and industrial progress in closing the gap between human and machine vision. Code and models of EasyRobust have been open-sourced at https://github.com/alibaba/easyrobust.
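To make the "worst-case perturbation" notion concrete, the following is a minimal PyTorch sketch of the PGD attack mentioned above: it iteratively nudges the input in the direction of the loss gradient while projecting back into an L-infinity ball of radius `eps`. This is a generic illustration, not EasyRobust's API; the function name and default hyperparameters (`eps=8/255`, 10 steps) are our own assumptions.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generic PGD sketch (not the EasyRobust API): maximize the loss
    within an L-infinity ball of radius eps around the clean input x."""
    # Random start inside the eps-ball, clipped to valid image range [0, 1].
    x_adv = (x.clone().detach()
             + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend along the gradient sign, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Adversarial training, one of the defenses the toolkit supports, simply trains the model on such `x_adv` batches instead of (or alongside) the clean inputs.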