Like Oil and Water: Group Robustness Methods and Poisoning Defenses May Be at Odds

📅 2025-04-02
🏛️ International Conference on Learning Representations
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper identifies a fundamental tension between group robustness methods and data poisoning defenses, both of which rely on similar heuristic sample-filtering mechanisms yet pursue opposing objectives: group robustness amplifies minority-group samples to improve their performance, whereas poisoning defenses remove anomalous samples to ensure security. Method: Under the realistic setting of unknown group annotations, the authors provide the first formal impossibility proof demonstrating that standard heuristics cannot distinguish minority-group samples from poisoned samples. Contribution/Results: Empirical evaluation reveals severe trade-offs: group robustness training increases the poisoning success rate from 0% to 97%; poisoning defenses reduce minority-group accuracy from 55% to 41%; and combining both approaches fails to resolve the conflict. The study exposes how benchmark-driven paradigms obscure latent multi-objective conflicts and calls for a co-design framework that jointly optimizes fairness and security.

๐Ÿ“ Abstract
Group robustness has become a major concern in machine learning (ML) as conventional training paradigms were found to produce high error on minority groups. Without explicit group annotations, proposed solutions rely on heuristics that aim to identify and then amplify the minority samples during training. In our work, we first uncover a critical shortcoming of these methods: an inability to distinguish legitimate minority samples from poison samples in the training set. By amplifying poison samples as well, group robustness methods inadvertently boost the success rate of an adversary, e.g., from 0% without amplification to over 97% with it. Notably, we supplement our empirical evidence with an impossibility result proving this inability of a standard heuristic under some assumptions. Moreover, scrutinizing recent poisoning defenses in both centralized and federated learning, we observe that they rely on similar heuristics to identify which samples should be eliminated as poisons. As a consequence, minority samples are eliminated along with poisons, which damages group robustness, e.g., dropping minority-group accuracy from 55% without the removal of the minority samples to 41% with it. Finally, as the two approaches pursue opposing goals using similar heuristics, our attempt to alleviate the trade-off by combining group robustness methods and poisoning defenses falls short. By exposing this tension, we also hope to highlight how benchmark-driven ML scholarship can obscure the trade-offs among different metrics, with potentially detrimental consequences.
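The amplification heuristic the abstract describes can be sketched in a few lines. This is a hypothetical, JTT-style toy (error-based upweighting), not the paper's exact setup: samples an identification model misclassifies are upweighted for the final training run, and the heuristic has no signal for *why* a sample is hard, so a hard minority sample and a poison receive the same boost.

```python
# Toy sketch of an error-based amplification heuristic (JTT-style):
# samples the identification model gets wrong are upweighted for the
# final model. The heuristic cannot tell WHY a sample is hard, so
# high-loss minority samples and poisons are amplified alike.

def amplify_hard_samples(labels, id_model_preds, upweight=5):
    """Return per-sample weights: misclassified samples get `upweight`."""
    return [upweight if y != p else 1 for y, p in zip(labels, id_model_preds)]

# Indices 0-1: majority samples the id model classifies correctly.
# Index 2: a legitimate minority sample; index 3: a poison.
# Both are misclassified, so both receive identical amplification.
labels = [0, 1, 1, 0]
preds  = [0, 1, 0, 1]  # id model errs on the minority AND the poison
print(amplify_hard_samples(labels, preds))  # -> [1, 1, 5, 5]
```

The weight vector assigns the same factor to the minority sample and the poison, which is exactly the indistinguishability the paper formalizes.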
Problem

Research questions and friction points this paper is trying to address.

Group robustness methods fail to distinguish minority from poison samples
Poisoning defenses eliminate minority samples along with poisons
Combining group robustness and poisoning defenses creates trade-offs
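The third friction point, that composing the two heuristics does not escape the trade-off, can be illustrated with a hypothetical pipeline (names and thresholds are my own, not the paper's): a loss-based defense drops the k highest-loss samples, then a robustness step amplifies the remaining high-loss samples. Because both stages key on the same loss signal, tuning k trades one failure for the other.

```python
# Hypothetical composition of a loss-based defense (drop the k
# highest-loss samples) with loss-based amplification (upweight
# surviving samples whose loss exceeds the median). Both stages use
# the same signal, so the conflict persists.

def pipeline(samples, losses, drop_k, upweight=5):
    order = sorted(range(len(samples)), key=lambda i: losses[i])
    kept = sorted(order[:len(samples) - drop_k])  # drop top-k loss
    med = sorted(losses[i] for i in kept)[len(kept) // 2]
    return {samples[i]: (upweight if losses[i] > med else 1) for i in kept}

samples = ["maj_0", "maj_1", "minority", "poison"]
losses  = [0.1, 0.2, 2.0, 1.8]  # minority loss exceeds the poison's

# drop_k=1 removes the MINORITY (highest loss) and amplifies the
# surviving poison; drop_k=2 removes both, securing the model but
# erasing the minority group entirely.
print(pipeline(samples, losses, drop_k=1))  # -> {'maj_0': 1, 'maj_1': 1, 'poison': 5}
print(pipeline(samples, losses, drop_k=2))  # -> {'maj_0': 1, 'maj_1': 1}
```

Whichever k is chosen, either security or group robustness is sacrificed, mirroring the paper's finding that combining the approaches falls short.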
Innovation

Methods, ideas, or system contributions that make the work stand out.

First formal impossibility result: standard heuristics cannot distinguish minority-group samples from poisons without group annotations
Empirical quantification of the trade-off: amplification raises poisoning success from 0% to 97%, while defense-side filtering cuts minority-group accuracy from 55% to 41%
Call for a co-design framework jointly optimizing group robustness (fairness) and poisoning resilience (security)