Near-Optimal Averaging Samplers and Matrix Samplers

📅 2024-11-16
🏛️ Cybersecurity and Cyberforensics Conference
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses high-precision mean estimation. We design the first efficient averaging sampler achieving both asymptotically optimal randomness complexity and nearly optimal sample complexity: for any $[0,1]$-valued function, it attains $\varepsilon$-accuracy with probability $1-\delta$ using only $m + O(\log(1/\delta))$ random bits and $O\big((\frac{1}{\varepsilon^2} \cdot \log \frac{1}{\delta})^{1+\alpha}\big)$ samples. We further introduce the first matrix sampler operating under spectral-norm guarantees, supporting $d$-dimensional complex matrix-valued functions and generalizing to arbitrary normed vector spaces. Technically, we combine randomness extractors, list-decodable codes, and spectral analysis to devise new combinatorial constructions. These yield an extractor whose seed length matches the information-theoretic lower bound up to an arbitrarily small constant factor above 1, while the sample complexity is optimal up to an $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor.

๐Ÿ“ Abstract
We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any $\delta < \varepsilon$ and any constant $\alpha > 0$, our sampler uses $m + O(\log(1/\delta))$ random bits to output $t = O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{1 + \alpha})$ samples $Z_1, \dots, Z_t \in \{0, 1\}^m$ such that for any function $f \colon \{0, 1\}^m \to [0, 1]$, \[ \Pr\left[\left|\frac{1}{t}\sum_{i=1}^t f(Z_i) - \mathbb{E}[f]\right| \leq \varepsilon\right] \geq 1 - \delta. \] The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the $O((\frac{1}{\varepsilon^2} \log \frac{1}{\delta})^{\alpha})$ factor. Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that $f \colon \{0, 1\}^m \to \mathbb{C}^{d \times d}$ and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity $m + \tilde{O}(\log(d/\delta))$ and sample complexity $O((\frac{1}{\varepsilon^2} \log \frac{d}{\delta})^{1 + \alpha})$ for any constant $\alpha > 0$, both near-optimal with only a logarithmic factor in randomness complexity and an additional $\alpha$ exponent on the sample complexity. We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy $k = \beta n$ for a large enough constant $\beta < 1$.
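For intuition about the interface the abstract defines, here is a minimal sketch of the baseline the paper improves on: the naive i.i.d. sampler, which spends $m$ fresh random bits per sample ($t \cdot m$ total) rather than the paper's $m + O(\log(1/\delta))$. This is not the paper's construction, and the function names are hypothetical:

```python
import random

def naive_iid_sampler(m, t, seed=0):
    """Naive averaging sampler: t independent uniform points of {0,1}^m.

    Uses t*m random bits in total; the paper's construction achieves a
    comparable accuracy guarantee with only m + O(log(1/delta)) bits.
    """
    rng = random.Random(seed)
    return [tuple(rng.randrange(2) for _ in range(m)) for _ in range(t)]

def estimate_mean(f, samples):
    # Empirical mean (1/t) * sum_i f(Z_i): the quantity the sampler
    # guarantees is epsilon-close to E[f] with probability >= 1 - delta.
    return sum(f(z) for z in samples) / len(samples)

# Example: f(z) = first bit of z, so E[f] = 1/2 over uniform z.
samples = naive_iid_sampler(m=8, t=20000)
est = estimate_mean(lambda z: z[0], samples)
```

By a Chernoff bound, $t = O(\frac{1}{\varepsilon^2}\log\frac{1}{\delta})$ i.i.d. samples suffice for $\varepsilon$-accuracy here; the paper's contribution is matching this sample count up to the $(\cdot)^{\alpha}$ factor while reusing randomness across samples.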
Problem

Research questions and friction points this paper is trying to address.

Develops an efficient averaging sampler with optimal randomness and near-optimal sample complexity
Extends the technique to matrix samplers with spectral-norm approximation guarantees
Constructs extractors with near-optimal seed length for high min-entropy sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient averaging sampler with optimal randomness complexity
Matrix sampler with near-optimal spectral norm guarantees
Generalized sampler definition for any normed vector space
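The matrix-sampler guarantee replaces the absolute value in the scalar definition with the spectral norm of the deviation matrix. A hedged numpy sketch of measuring that deviation for an i.i.d. baseline sampler (a toy $f$ of my choosing, not the paper's construction):

```python
import numpy as np

def spectral_error(f, samples, true_mean):
    """Spectral-norm deviation ||(1/t) sum_i f(Z_i) - E[f]||.

    ord=2 in np.linalg.norm gives the largest singular value,
    i.e. the spectral norm used in the matrix-sampler definition.
    """
    avg = sum(f(z) for z in samples) / len(samples)
    return np.linalg.norm(avg - true_mean, ord=2)

rng = np.random.default_rng(0)
d, m, t = 2, 8, 20000

# Toy f: maps an m-bit string to a d x d complex Hermitian matrix
# built from its first two bits, so E[f] = diag(1/2, 1/2) over uniform z.
def f(z):
    return np.diag([float(z[0]), float(z[1])]).astype(complex)

samples = rng.integers(0, 2, size=(t, m))          # naive i.i.d. baseline
err = spectral_error(f, samples, 0.5 * np.eye(d, dtype=complex))
```

The paper's matrix sampler drives this error below $\varepsilon$ with probability $1 - \delta$ using only $m + \tilde{O}(\log(d/\delta))$ random bits, versus the $t \cdot m$ bits drawn here.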