🤖 AI Summary
This paper investigates the learnability of distribution classes under adaptive adversaries: adversaries that intercept the samples requested by the learner and perturb them with full knowledge of their values before passing them on. This breaks the i.i.d. structure of the data the learner sees and goes strictly beyond oblivious adversaries, which can only modify the underlying distribution in advance.
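To make the distinction concrete, here is a minimal sketch of the two access models in Python (the names, the batch-based interaction, and the budget-as-replacement-fraction model are our own illustration, not the paper's formal protocol):

```python
import random

def oblivious_samples(n, true_sampler, corrupted_sampler, budget):
    """Oblivious adversary: commits to a corruption before any sample is
    drawn; here modeled as a fixed mixture, so the learner still sees
    i.i.d. draws."""
    return [corrupted_sampler() if random.random() < budget else true_sampler()
            for _ in range(n)]

def adaptive_samples(n, true_sampler, adversary, budget):
    """Adaptive adversary: inspects the realized batch before deciding
    which samples to replace, so the output need not be i.i.d."""
    batch = [true_sampler() for _ in range(n)]
    k = int(budget * n)          # number of samples the budget allows replacing
    return adversary(batch, k)   # may alter up to k samples, seen in full

def shift_largest(batch, k):
    """Example adaptive strategy: shift the k largest samples upward, a
    choice only possible after observing the whole batch."""
    order = sorted(range(len(batch)), key=lambda i: batch[i], reverse=True)
    out = list(batch)
    for i in order[:k]:
        out[i] += 10.0
    return out

if __name__ == "__main__":
    gauss = lambda: random.gauss(0.0, 1.0)
    shifted = lambda: random.gauss(10.0, 1.0)
    print(oblivious_samples(5, gauss, shifted, budget=0.2))
    print(adaptive_samples(5, gauss, shift_largest, budget=0.2))
```

The key difference is in when the corruption is chosen: the oblivious mixture is fixed before sampling, while the adaptive adversary picks its replacements only after seeing the realized batch.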
Method: We formulate a general, formal notion of learnability with respect to adaptive adversaries that explicitly accounts for the adversary's budget, i.e., how much corruption it may apply to the sample. The framework combines probabilistic analysis, adversarial game modeling, and sample complexity arguments.
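One plausible shape for such a budgeted definition, sketched in LaTeX (the symbols $\eta$, $\alpha$, $\mathrm{Adv}$, and the choice of total variation distance are assumptions for illustration, not necessarily the paper's exact formalization):

```latex
% Illustrative sketch (notation assumed): a class \mathcal{C} is
% learnable against adaptive adversaries of budget \eta if there exist
% a learner A and a sample size n(\epsilon, \delta) such that, for every
% p \in \mathcal{C} and every adaptive adversary \mathrm{Adv} of budget \eta,
\[
  \Pr_{S \sim p^{n}}\Big[\, d_{\mathrm{TV}}\big(A(\mathrm{Adv}(S)),\, p\big)
      \le \alpha(\eta) + \epsilon \,\Big] \ge 1 - \delta ,
\]
% where \alpha(\eta) quantifies the unavoidable error caused by a
% budget-\eta corruption.
```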
Results: We prove that, under additive perturbations, learnability against adaptive adversaries is a strictly stronger condition than learnability against oblivious adversaries. This yields a learnability hierarchy grounded explicitly in adversary capability and establishes theoretical foundations for robust distribution learning.
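In set notation, the separation can be summarized as follows (the $\mathrm{Learn}$ notation is ours, introduced only for this illustration):

```latex
% With \mathrm{Learn}^{\mathrm{add}}_{\bullet}(\eta) denoting the family
% of distribution classes learnable at budget \eta under the indicated
% additive adversary, the strict-inclusion result reads:
\[
  \mathrm{Learn}^{\mathrm{add}}_{\mathrm{adaptive}}(\eta)
    \subsetneq
  \mathrm{Learn}^{\mathrm{add}}_{\mathrm{oblivious}}(\eta).
\]
```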
📝 Abstract
We consider the question of learnability of distribution classes in the presence of adaptive adversaries, that is, adversaries capable of intercepting the samples requested by a learner and applying manipulations with full knowledge of the samples before passing them on to the learner. This stands in contrast to oblivious adversaries, who can only modify the underlying distribution the samples come from but not their i.i.d. nature. We formulate a general notion of learnability with respect to adaptive adversaries, taking into account the budget of the adversary. We show that learnability with respect to additive adaptive adversaries is a strictly stronger condition than learnability with respect to additive oblivious adversaries.