Amulet: a Python Library for Assessing Interactions Among ML Defenses and Risks

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning models exhibit unintended cross-risk interactions among security, privacy, and fairness, where mitigating one risk often exacerbates others, yet no systematic evaluation framework exists. Method: We introduce the first Python library enabling unified assessment of such cross-risk interactions, featuring a modular, object-oriented architecture that integrates representative attacks, defenses, and evaluation metrics under a consistent API with plug-and-play extensibility. Contribution/Results: The library is comprehensive, scalable, and user-friendly, enabling large-scale pre-deployment joint-risk analysis. It quantifies previously unexplored interactions (e.g., adversarial robustness vs. membership inference vs. demographic parity) and facilitates defense optimization without unintended side effects. Empirical validation confirms its effectiveness in guiding robust, multi-objective defense design.
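The summary above describes a modular, object-oriented architecture with attacks, defenses, and metrics behind a consistent API. The paper's actual class names and signatures are not shown on this page, so the following is a minimal hypothetical sketch of that kind of design; the `Attack`/`Defense` base classes, the `evaluate` helper, and all numeric values are illustrative assumptions, not AMULET's real interface.

```python
from abc import ABC, abstractmethod

class Attack(ABC):
    """Every attack exposes the same entry point: take a model, return a risk score."""
    @abstractmethod
    def run(self, model) -> float: ...

class Defense(ABC):
    """Every defense exposes the same entry point: take a model, return a hardened one."""
    @abstractmethod
    def apply(self, model): ...

class MembershipInference(Attack):
    def run(self, model) -> float:
        # Placeholder: a real attack would query the model on member/non-member data.
        return model.get("mi_risk", 0.5)

class AdversarialTraining(Defense):
    def apply(self, model):
        hardened = dict(model)
        hardened["robustness"] = model.get("robustness", 0.0) + 0.3
        # Illustrative unintended interaction: robustness training
        # can increase membership-inference leakage.
        hardened["mi_risk"] = model.get("mi_risk", 0.5) + 0.1
        return hardened

def evaluate(model, defense: Defense, attacks: list[Attack]) -> dict:
    """Measure each risk before and after the defense to expose cross-risk interactions."""
    hardened = defense.apply(model)
    return {type(a).__name__: (a.run(model), a.run(hardened)) for a in attacks}
```

Because every attack and defense shares one call signature, a joint-risk sweep is a single loop over registered modules rather than per-tool glue code.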

📝 Abstract
ML models are susceptible to risks to security, privacy, and fairness. Several defenses are designed to protect against their intended risks, but can inadvertently affect susceptibility to other unrelated risks, known as unintended interactions. Several jurisdictions are preparing ML regulatory frameworks that require ML practitioners to assess the susceptibility of ML models to different risks. This calls for a library for evaluating unintended interactions that can be used by (a) practitioners to evaluate unintended interactions at scale prior to model deployment and (b) researchers to design defenses which do not suffer from an unintended increase in unrelated risks. Ideally, such a library should be i) comprehensive, including representative attacks, defenses, and metrics for different risks, ii) extensible to new modules thanks to its modular design, iii) consistent, with a user-friendly API template for inputs and outputs, and iv) applicable to evaluating previously unexplored unintended interactions. We present AMULET, a Python library covering risks to security, privacy, and fairness that satisfies all these requirements. AMULET can be used to evaluate unexplored unintended interactions, compare the effectiveness of defenses or attacks, and incorporate new attacks and defenses.
Problem

Research questions and friction points this paper is trying to address.

Assessing unintended interactions among ML defenses and risks
Evaluating susceptibility to security, privacy, and fairness risks
Providing a comprehensive, extensible library for ML risk assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Python library for ML defense interactions
Modular design for extensible risk assessment
User-friendly API for evaluating unintended interactions
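The plug-and-play extensibility highlighted above is commonly achieved with a registry pattern: new modules register themselves under a name and are discovered through the shared interface. The `register` decorator and the `DemographicParityGap` metric below are hypothetical stand-ins used only to illustrate the idea, not AMULET's actual API.

```python
# Hypothetical module registry keyed by (kind, name).
REGISTRY = {}

def register(kind: str, name: str):
    """Class decorator that makes a new attack/defense/metric discoverable by name."""
    def deco(cls):
        REGISTRY[(kind, name)] = cls
        return cls
    return deco

@register("metric", "demographic_parity_gap")
class DemographicParityGap:
    def run(self, predictions, groups) -> float:
        # Absolute gap in positive-prediction rate between demographic groups 0 and 1.
        def rate(g):
            members = [p for p, grp in zip(predictions, groups) if grp == g]
            return sum(members) / len(members)
        return abs(rate(0) - rate(1))
```

A user-supplied module then needs no changes to the evaluation loop: once registered, it is instantiated and invoked exactly like the built-in ones, e.g. `REGISTRY[("metric", "demographic_parity_gap")]().run(preds, groups)`.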