🤖 AI Summary
This work addresses the lack of theoretical guarantees for low-dimensional marginal approximation in large graph models. We propose an efficient, structure-aware marginal inference framework grounded in local graph topology. Our key contributions are threefold: (i) We introduce Stein’s method to marginal analysis for the first time, defining a δ-locality condition and deriving dimension-free bounds on marginal approximation error; this condition naturally subsumes common structural assumptions such as sparsity. (ii) We design two localized algorithms—local likelihood-guided subspace projection and local score matching—enabling both sampling and density estimation to be performed locally. (iii) Theoretical analysis and empirical evaluation demonstrate that our approach substantially reduces sample complexity and computational cost, supports parallel implementation, and achieves both high efficiency and accuracy in high-dimensional marginal inference.
📝 Abstract
Many spatial models exhibit locality structures that effectively reduce their intrinsic dimensionality, enabling efficient approximation and sampling of high-dimensional distributions. However, existing approximation techniques mainly focus on joint distributions and do not guarantee accuracy for low-dimensional marginals. By leveraging locality structures, we establish a dimension-independent uniform error bound for the marginals of approximate distributions. Inspired by Stein's method, we introduce a novel $\delta$-locality condition that quantifies the locality in distributions, and connect it to structural assumptions such as sparse graphical models. The theoretical guarantee motivates the localization of existing sampling methods, as we illustrate through the localized likelihood-informed subspace method and localized score matching. We show that these methods exploit the locality structure to greatly reduce sample complexity and computational cost via localized and parallel implementations.
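To make the idea of localized score matching concrete, here is a minimal toy sketch (not the paper's actual algorithm): for a Gaussian Markov chain with tridiagonal precision, the score component at a coordinate depends only on its Markov blanket, so the Hyvärinen score-matching normal equations can be solved using the empirical covariance of that local block alone. The model size, precision entries, and neighborhood choice below are illustrative assumptions.

```python
import numpy as np

d, n = 20, 50000
rng = np.random.default_rng(0)

# Hypothetical toy model: tridiagonal precision matrix, i.e. a 1-D Gaussian Markov chain
Lam = 2.0 * np.eye(d) - 0.5 * (np.eye(d, k=1) + np.eye(d, k=-1))
Sigma = np.linalg.inv(Lam)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

i = 10                       # an interior coordinate
N = [i - 1, i, i + 1]        # its Markov blanket, plus i itself
idx = N.index(i)

# Hyvarinen score matching with a linear local model s_i(x) = w^T x_N:
#   J(w) = 0.5 * w^T C_N w + w[idx],   minimized where  C_N w = -e_idx.
# Only the covariance of the local block enters, never the full d x d matrix.
C_N = np.cov(X[:, N].T)
e = np.zeros(len(N))
e[idx] = 1.0
w = np.linalg.solve(C_N, -e)

# For a Gaussian, the true score is -Lam @ x, so w should recover -Lam[i, N]
print(w)
```

Because coordinate `i`'s Markov blanket is contained in `N`, the recovered weights match the corresponding row of the negative precision matrix up to sampling error, even though only a 3x3 covariance was estimated; this is the sample-complexity saving that localization buys in this toy setting.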