🤖 AI Summary
This paper addresses the problem of robust false discovery rate (FDR) control in distributed multiple testing under Byzantine adversaries that manipulate local p-values. We systematically characterize, for the first time, the mechanisms by which Oracle-type and Benjamini–Hochberg (BH)-driven attacks violate FDR guarantees, propose provably robust distributed correction strategies, and introduce a more challenging adversarial modeling framework. Methodologically, we integrate the BH procedure, stochastic process analysis, adversarially robust statistical inference, and theoretical bound derivation, supported by large-scale simulations. Theoretically, we establish a quantitative threshold relationship between adversary proportion and FDR inflation. Empirically, our approach ensures FDR ≤ α while quantifying the fundamental trade-off between robustness and statistical efficiency; moreover, it uncovers latent threat patterns induced by higher-order adaptive attacks.
📝 Abstract
This work studies distributed multiple testing with false discovery rate (FDR) control in the presence of Byzantine attacks, where an adversary captures a fraction of the nodes and corrupts their reported p-values. We focus on two baseline attack models: an oracle model with the full knowledge of which hypotheses are true nulls, and a practical attack model that leverages the Benjamini-Hochberg (BH) procedure locally to classify which p-values follow the true null hypotheses. We provide a thorough characterization of how both attack models affect the global FDR, which in turn motivates counter-attack strategies and stronger attack models. Our extensive simulation studies confirm the theoretical results, highlight key design trade-offs under attacks and countermeasures, and provide insights into more sophisticated attacks.