🤖 AI Summary
This work exposes a critical vulnerability in passively secure multi-party computation (MPC) protocols for privacy-preserving machine learning (PPML): actively malicious adversaries can mount covert attacks that compromise both model integrity and data privacy, including exact reconstruction of original training samples, with near-zero risk of detection. The authors construct the first provably undetectable malicious adversary model, combining gradient and intermediate-value inversion, exploitation of protocol logic flaws, and differential reconstruction techniques. They demonstrate practical feasibility on mainstream frameworks, including ABY3 and Cheetah, reporting success rates above 95% for both data reconstruction and backdoor injection. These results challenge the prevailing assumption that passive security suffices for PPML training and establish the necessity of active security mechanisms, providing both theoretical foundations and empirical evidence to advance PPML threat modeling and secure protocol design.
📝 Abstract
Secure multiparty computation (MPC) allows data owners to train machine learning models on combined data while keeping the underlying training data private. The MPC threat model either considers an adversary who passively corrupts some parties without affecting their overall behavior, or an adversary who actively modifies the behavior of corrupt parties. It has been argued that in some settings, active security is not a major concern, partly because of the potential risk of reputation loss if a party is detected cheating. In this work we show explicit, simple, and effective attacks that an active adversary can run on existing passively secure MPC training protocols, while keeping essentially zero risk of the attack being detected. The attacks we show can compromise both the integrity and privacy of the model, including attacks reconstructing exact training data. Our results challenge the belief that a threat model that does not include malicious behavior by the involved parties may be reasonable in the context of PPML, motivating the use of actively secure protocols for training.
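The core intuition behind an undetectable active deviation can be illustrated with plain additive secret sharing (a deliberate simplification; the paper's targets such as ABY3 use replicated sharing, and the names below are illustrative, not taken from the paper). In a passively secure protocol, no party's shares are ever verified, and each individual share is uniformly random, so a corrupt party can shift its own share by an arbitrary offset: the tampered share is statistically identical to an honest one, yet the reconstructed value is biased. A minimal sketch under these assumptions:

```python
import random

P = 2**64  # arithmetic modulo 2^64, as in many MPC frameworks

def share(secret, n=3):
    """Additively secret-share `secret` among n parties."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

secret = 42
shares = share(secret)
assert reconstruct(shares) == secret

# An actively corrupt party adds an offset to its own share.
# The modified share is still a uniformly random ring element,
# so a passively secure protocol has no check that could flag it,
# yet the reconstructed result is shifted by the offset.
offset = 7
shares[0] = (shares[0] + offset) % P
assert reconstruct(shares) == (secret + offset) % P
```

The same principle, applied to shared gradients or intermediate activations during training, lets the adversary steer the model or probe honest parties' data without producing any protocol transcript an honest party could distinguish from a correct run.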