Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning models are vulnerable to inference attacks such as membership inference, yet conventional shadow-model-based approaches require training numerous independent models, incurring prohibitive computational overhead and limiting practical applicability. To address this, the authors propose SHAPOOL, a unified framework built upon the Mixture-of-Experts (MoE) architecture. SHAPOOL jointly trains multiple shadow models via path-choice routing, pathway regularization, and pathway alignment, enabling shared subnetworks while preserving model diversity and independence. Experimental results demonstrate that SHAPOOL significantly reduces shadow-model construction cost across diverse membership inference attack settings while maintaining attack accuracy comparable to traditional, independently trained shadow models. Consequently, SHAPOOL enhances both the efficiency and scalability of inference attack evaluation without compromising assessment fidelity.

📝 Abstract
Machine learning models are often vulnerable to inference attacks that expose sensitive information from their training data. The shadow model technique is commonly employed in such attacks, for example membership inference. However, the need for a large number of shadow models leads to high computational costs, limiting their practical applicability. This inefficiency mainly stems from the independent training and use of these shadow models. To address this issue, we present SHAPOOL, a novel shadow pool training framework that constructs multiple shared models and trains them jointly within a single process. In particular, we leverage the Mixture-of-Experts mechanism as the shadow pool to interconnect individual models, enabling them to share some sub-networks and thereby improving efficiency. To ensure the shared models closely resemble independent models and serve as effective substitutes, we introduce three novel modules: path-choice routing, pathway regularization, and pathway alignment. These modules guarantee random data allocation for pathway learning, promote diversity among shared models, and maintain consistency with target models. We evaluate SHAPOOL in the context of various membership inference attacks and show that it significantly reduces the computational cost of shadow model construction while maintaining comparable attack performance.
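The core idea in the abstract can be illustrated with a toy sketch: a pool of experts is stored once, and each "shadow model" is a fixed path that selects one expert per layer, so models share parameters wherever their paths overlap. All names, sizes, and the path-assignment scheme below are illustrative assumptions, not the paper's actual design.

```python
import random

random.seed(0)

# Toy MoE "shadow pool": each layer holds several scalar "experts", and
# each shadow model is a fixed path choosing one expert per layer.
# (Illustrative sketch; not SHAPOOL's actual architecture.)
n_layers, n_experts = 2, 4
experts = [[random.uniform(-1, 1) for _ in range(n_experts)]
           for _ in range(n_layers)]

# Path-choice routing (sketch): each shadow model gets a random expert
# index per layer, so models share parameters wherever paths overlap.
n_shadow = 6
paths = [[random.randrange(n_experts) for _ in range(n_layers)]
         for _ in range(n_shadow)]

def shadow_forward(x, path):
    """Run input x through the experts selected by one shadow path."""
    for layer, e in enumerate(path):
        x = x * experts[layer][e]
    return x

outputs = [shadow_forward(1.0, p) for p in paths]

# Parameter sharing: the pool stores n_layers * n_experts experts in
# total, versus n_shadow * n_layers for fully independent models.
pool_params = n_layers * n_experts        # 8
independent_params = n_shadow * n_layers  # 12
```

Once the number of shadow models exceeds the number of experts per layer, the shared pool holds fewer parameters than independent training would, which is the source of the efficiency gain the abstract claims; the paper's pathway regularization and alignment modules then keep the shared paths diverse and consistent with target models.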
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs of shadow models in inference attacks
Enhancing shadow model efficiency via shared Mixture-of-Experts framework
Maintaining attack performance while minimizing shadow model training overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shared shadow models via Mixture-of-Experts
Three modules (path-choice routing, pathway regularization, pathway alignment) ensure diversity and consistency
Reduces computational cost while maintaining performance
Li Bai
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Qingqing Ye
Assistant Professor, The Hong Kong Polytechnic University
data privacy and security; adversarial machine learning
Xinwei Zhang
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Sen Zhang
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Zi Liang
Hong Kong Polytechnic University
Natural Language Processing; AI Security
Jianliang Xu
Department of Computer Science, Hong Kong Baptist University
Haibo Hu
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University; PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems