Sample-Adaptivity Tradeoff in On-Demand Sampling

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the trade-off between sample complexity and round complexity in multi-distribution learning under on-demand sampling. We propose a novel framework—Optimization via On-Demand Sampling (OODS)—that unifies the modeling of both the realizable and the agnostic settings. Leveraging adaptive sampling, statistical learning theory, and complexity analysis, we design near-optimal algorithms and establish tight upper and lower bounds. In the realizable setting, our algorithm achieves optimal sample complexity $\tilde{O}(d k^{\Theta(1/r)} / \varepsilon)$; in the agnostic setting, it attains sample complexity $\tilde{O}((d + k)/\varepsilon^2)$ within $\tilde{O}(\sqrt{k})$ rounds. Crucially, we identify that achieving subpolynomial round complexity necessitates fundamentally new techniques—thereby exposing and overcoming inherent limitations of existing approaches. Our work provides the first rigorous characterization of this trade-off and advances the theoretical foundations of interactive multi-distribution learning.

📝 Abstract
We study the tradeoff between sample complexity and round complexity in on-demand sampling, where the learning algorithm adaptively samples from $k$ distributions over a limited number of rounds. In the realizable setting of Multi-Distribution Learning (MDL), we show that the optimal sample complexity of an $r$-round algorithm scales approximately as $dk^{\Theta(1/r)} / \epsilon$. For the general agnostic case, we present an algorithm that achieves near-optimal sample complexity of $\widetilde{O}((d + k) / \epsilon^2)$ within $\widetilde{O}(\sqrt{k})$ rounds. Of independent interest, we introduce a new framework, Optimization via On-Demand Sampling (OODS), which abstracts the sample-adaptivity tradeoff and captures most existing MDL algorithms. We establish nearly tight bounds on the round complexity in the OODS setting. The upper bounds directly yield the $\widetilde{O}(\sqrt{k})$-round algorithm for agnostic MDL, while the lower bounds imply that achieving sub-polynomial round complexity would require fundamentally new techniques that bypass the inherent hardness of OODS.
Problem

Research questions and friction points this paper is trying to address.

Studies tradeoff between sample complexity and round complexity in on-demand sampling
Develops algorithms for Multi-Distribution Learning in realizable and agnostic settings
Introduces Optimization via On-Demand Sampling framework to capture adaptivity tradeoffs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-round adaptive sampling from k distributions
OODS framework abstracts sample-adaptivity tradeoffs
Near-optimal sample complexity with sublinear round complexity
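The multi-round adaptive sampling idea in the bullets above can be sketched generically. The following is a hypothetical illustration, not the paper's algorithm: the proportional allocation rule, the `sample_from` loss oracle, and the budget parameters are all assumptions made for exposition. Each round, the learner spends its per-round budget across the $k$ distributions in proportion to their current estimated loss, so "harder" distributions receive more samples in later rounds.

```python
def on_demand_sampling(sample_from, k, rounds, budget_per_round):
    """Hypothetical sketch of r-round on-demand sampling.

    sample_from(i) returns one observed loss value from distribution i.
    Each round, the per-round budget is split across the k distributions
    in proportion to their current estimated loss (adaptive allocation).
    """
    est_loss = [1.0] * k  # start with uniform, pessimistic loss estimates
    counts = [0] * k      # total samples drawn from each distribution
    for _ in range(rounds):
        total = sum(est_loss)
        # Allocate the budget proportionally to estimated loss,
        # drawing at least one sample from every distribution.
        alloc = [max(1, round(budget_per_round * l / total)) for l in est_loss]
        for i, n_i in enumerate(alloc):
            draws = [sample_from(i) for _ in range(n_i)]
            counts[i] += n_i
            # Replace the estimate with this round's empirical mean loss.
            est_loss[i] = sum(draws) / n_i
    return counts, est_loss
```

For instance, with three distributions whose true losses are fixed at 0.1, 0.5, and 0.9, the allocation shifts sample budget toward the third distribution after the first round. The sample-adaptivity tradeoff studied in the paper concerns exactly how few such rounds suffice before the total sample count must blow up.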