High-Dimensional Gaussian Mean Estimation under Realizable Contamination

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the computational and statistical complexity of robust high-dimensional Gaussian mean estimation under the realizable ε-contamination model, in which the corruption mechanism may depend on the data but is constrained by a contamination rate ε. By establishing lower bounds via the Statistical Query (SQ) model, the low-degree polynomial method, and polynomial threshold function (PTF) tests, the study reveals, for the first time, an information-computation gap for this problem: efficient algorithms must pay a sample-complexity overhead relative to what is information-theoretically achievable. Complementing these hardness results, the authors develop an efficient robust estimator whose sample-time trade-off nearly matches the lower bounds, yielding a qualitative characterization of the complexity landscape of this estimation task.

📝 Abstract
We study mean estimation for a Gaussian distribution with identity covariance in $\mathbb{R}^d$ under a missing-data scheme termed the realizable $\varepsilon$-contamination model. In this model an adversary chooses a function $r(x)$ taking values between $0$ and $\varepsilon$, and each sample $x$ goes missing with probability $r(x)$. Recent work (Ma et al., 2024) proposed this model as an intermediate-strength setting between Missing Completely At Random (MCAR) -- where missingness is independent of the data -- and Missing Not At Random (MNAR) -- where missingness may depend arbitrarily on the sample values and can lead to non-identifiability issues. That work established information-theoretic upper and lower bounds for mean estimation in the realizable contamination model, but its proposed estimators incur runtime exponential in the dimension, leaving open the possibility of computationally efficient algorithms in high dimensions. In this work, we establish an information-computation gap in the Statistical Query (SQ) model (and, as a corollary, for low-degree polynomials and PTF tests), showing that efficient algorithms must either use substantially more samples than is information-theoretically necessary or incur exponential runtime. We complement our SQ lower bound with an algorithm whose sample-time tradeoff nearly matches our lower bound. Together, these results qualitatively characterize the complexity of Gaussian mean estimation under realizable $\varepsilon$-contamination.
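To make the missingness model concrete, the following minimal sketch (not the paper's estimator) simulates one valid adversarial choice of $r(x)$: drop a sample with probability $\varepsilon$ whenever its first coordinate is positive, and never otherwise. Because missingness depends on the data, the naive empirical mean of the surviving samples is biased along that coordinate, which is exactly why nontrivial robust estimators are needed. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 20, 200_000, 0.1  # dimension, sample count, contamination rate
mu = np.zeros(d)              # true mean (unknown to the estimator)

# Draw n samples from N(mu, I_d).
X = rng.standard_normal((n, d)) + mu

# One admissible adversary: r(x) = eps if x_0 > 0, else 0.
# This satisfies the constraint 0 <= r(x) <= eps for every x.
r = eps * (X[:, 0] > 0)

# Each sample goes missing independently with probability r(x).
kept = rng.random(n) >= r
X_obs = X[kept]

# Naive plug-in estimate from the observed (non-missing) samples.
naive = X_obs.mean(axis=0)

# Coordinate 0 is biased negative: samples with x_0 > 0 are
# under-represented among survivors. Other coordinates are unaffected.
print("biased coord:", naive[0])
print("clean coord: ", naive[1])
```

Under this choice of $r$, the surviving distribution has first-coordinate mean $-\varepsilon\,\mathbb{E}[x_0 \mathbf{1}\{x_0>0\}]/(1-\varepsilon/2) \approx -0.042$ for $\varepsilon=0.1$, so the naive estimator's error does not vanish as $n \to \infty$.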
Problem

Research questions and friction points this paper is trying to address.

Gaussian mean estimation
realizable contamination
missing data
high-dimensional statistics
information-computation gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

realizable contamination
Gaussian mean estimation
Statistical Query lower bound
information-computation gap
high-dimensional statistics