Distributionally-Constrained Adversaries in Online Learning

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the learnability of online learning under distributional constraints, where data are drawn from structured, smooth, or low-complexity distribution families that interpolate between purely stochastic and fully adversarial environments. Methodologically, it integrates combinatorial game theory, Rademacher complexity, and a geometric characterization of distribution classes. The work establishes the first necessary and sufficient conditions for learnability against both oblivious and adaptive adversaries for arbitrary distribution families; strictly generalizes smoothed analysis and related frameworks; and proposes a universal, adaptive algorithm that requires no prior knowledge of the underlying distribution class. For natural hypothesis classes such as linear classifiers, it derives prior-free, optimal regret bounds. Crucially, it uncovers a fundamental coupling mechanism: learnability is determined by the interplay between the intrinsic complexity of the function class and the structural constraints imposed on the data-generating distributions.
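For context, the regret that such guarantees bound is the standard online-learning notion (this definition is textbook background, not taken from this page; the symbols $f_t$, $\ell$, and $\mathcal{F}$ are generic placeholders):

```latex
\mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \ell\bigl(f_t, (x_t, y_t)\bigr)
\;-\; \min_{f \in \mathcal{F}} \sum_{t=1}^{T} \ell\bigl(f, (x_t, y_t)\bigr),
```

i.e., the learner's cumulative loss against the adversary's sequence, minus that of the best fixed hypothesis in hindsight. "Non-trivial regret" means $\mathrm{Reg}_T = o(T)$.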

📝 Abstract
There has been much recent interest in understanding the continuum from adversarial to stochastic settings in online learning, with various frameworks, including smoothed settings, proposed to bridge this gap. We consider the more general and flexible framework of distributionally constrained adversaries, in which instances are drawn from distributions chosen by an adversary within some constrained distribution class [RST11]. Compared to smoothed analysis, we consider general distributional classes, which allow for a fine-grained understanding of learning settings between fully stochastic and fully adversarial in which a learner can achieve non-trivial regret. We give a characterization of which distribution classes are learnable in this context against both oblivious and adaptive adversaries, providing insights into the types of interplay between the function class and distributional constraints on adversaries that enable learnability. In particular, our results recover and generalize learnability for known smoothed settings. Further, we show that for several natural function classes, including linear classifiers, learning can be achieved without any prior knowledge of the distribution class -- in other words, a learner can simultaneously compete against any constrained adversary within learnable distribution classes.
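To make the setting concrete, here is a minimal toy sketch (not the paper's algorithm) of a Hedge learner over a finite expert set facing a crude stand-in for a constrained adversary: the adversary may target the learner's heaviest expert, but a smoothness parameter `sigma` forces it to mix its loss vector with uniform noise. All names and parameters here are illustrative assumptions.

```python
import math
import random

def hedge_against_smooth_adversary(T=2000, n_experts=10, sigma=0.5, seed=0):
    """Toy illustration only: Hedge (multiplicative weights) against an
    adaptive adversary whose losses must be a (1 - sigma)/sigma mixture
    of its chosen loss and uniform noise -- a rough caricature of a
    distributionally constrained adversary, not the paper's construction."""
    rng = random.Random(seed)
    eta = math.sqrt(8 * math.log(n_experts) / T)  # standard Hedge step size
    weights = [1.0] * n_experts
    learner_loss = 0.0
    expert_losses = [0.0] * n_experts
    for _ in range(T):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Adversary targets the learner's currently heaviest expert,
        # but the constraint forces mixing with uniform noise in [0, 1].
        target = max(range(n_experts), key=lambda i: probs[i])
        losses = [
            (1 - sigma) * (1.0 if i == target else 0.0) + sigma * rng.random()
            for i in range(n_experts)
        ]
        learner_loss += sum(p, l_i := None) if False else sum(
            p * l for p, l in zip(probs, losses)
        )
        for i in range(n_experts):
            expert_losses[i] += losses[i]
            weights[i] *= math.exp(-eta * losses[i])
    # Regret against the best fixed expert in hindsight.
    return learner_loss - min(expert_losses)
```

The classical Hedge guarantee bounds this regret by sqrt((T/2) ln n_experts) for any loss sequence in [0, 1], so even this adaptive-but-smoothed adversary cannot force linear regret; the paper's contribution is characterizing exactly which distribution classes admit such guarantees for general function classes.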
Problem

Research questions and friction points this paper is trying to address.

Characterize learnable distribution classes for constrained adversaries
Generalize learnability for smoothed and adversarial settings
Enable learning without prior knowledge of distribution class
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally constrained adversaries framework
Characterization of learnable distribution classes
No prior knowledge needed for learning