🤖 AI Summary
This work studies density estimation and mode estimation through an expand-and-sparsify representation. By mapping inputs into a high-dimensional sparse binary space, the authors construct a simple linear density estimator and a corresponding mode-recovery algorithm. The density estimator attains minimax-optimal convergence rates under the ℓ∞ norm, and the mode-estimation procedures recover single or multiple modes at near-optimal rates—within at most a logarithmic factor—for both unimodal and multimodal distributions under mild conditions. The key ingredient is the combination of random linear projections with top-k sparsification, which balances statistical accuracy against computational tractability.
📝 Abstract
Expand-and-sparsify representations are a class of theoretical models that capture sparse representation phenomena observed in the sensory systems of many animals. At a high level, these representations map an input $x \in \mathbb{R}^d$ into a much higher-dimensional space $\mathbb{R}^m$, $m \gg d$, via random linear projections before zeroing out all but the $k \ll m$ largest entries. The result is a $k$-sparse vector in $\{0,1\}^m$. We study the suitability of this representation for two fundamental statistical problems: density estimation and mode estimation. For density estimation, we show that a simple linear function of the expand-and-sparsify representation produces an estimator with minimax-optimal $\ell_{\infty}$ convergence rates. For mode estimation, we provide simple algorithms on top of our density estimator that recover single or multiple modes at optimal rates up to logarithmic factors under mild conditions.
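As a concrete illustration of the representation described above, the sketch below projects inputs through a random matrix and keeps only the $k$ largest coordinates, producing $k$-sparse binary codes. The Gaussian choice of projection, the specific dimensions, and the kernel-like density score at the end are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def expand_and_sparsify(X, W, k):
    """Map each row of X to a k-sparse binary vector in {0,1}^m.

    W is an (m, d) random projection matrix. For each input we keep
    the k largest projections (a winner-take-all step) and zero the
    rest, yielding an exactly k-sparse binary code.
    """
    proj = X @ W.T                                    # (n, m) projections
    top = np.argpartition(proj, -k, axis=1)[:, -k:]   # indices of k largest per row
    Z = np.zeros_like(proj)
    np.put_along_axis(Z, top, 1.0, axis=1)
    return Z

rng = np.random.default_rng(0)
d, m, k = 2, 200, 10                 # illustrative dimensions (k << m, m >> d)
W = rng.standard_normal((m, d))      # assumed Gaussian random expansion

X = rng.standard_normal((500, d))    # sample data
Z = expand_and_sparsify(X, W, k)

# A kernel-like density score at a query point: the average overlap
# between the query's code and the data codes. This is only meant to
# convey the "simple linear function" idea; the paper's estimator and
# its normalization may differ.
zq = expand_and_sparsify(np.zeros((1, d)), W, k)
score = float((zq @ Z.T).mean() / k)   # lies in [0, 1]
```

Because each code is binary and exactly $k$-sparse, the score above is a linear function of the stored representations, which is what makes the estimator cheap to evaluate once the codes are computed.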