🤖 AI Summary
Classical multidimensional scaling (CMDS) lacks a statistically rigorous framework for constructing reliable unified confidence sets—up to rigid transformations—for the underlying configuration when confronted with noisy, heterogeneous dissimilarity data.
Method: This paper establishes, for the first time, the distributional convergence theory of CMDS embedding estimators and develops a multiplier bootstrap-based statistical inference framework. The method adaptively accommodates heteroscedastic noise and enables construction of unified confidence sets under the rigid transformation group with asymptotic validity.
Contribution/Results: We prove the asymptotic validity of the proposed bootstrap procedure. Numerical experiments demonstrate substantial improvements in finite-sample coverage probability and estimation precision over existing approaches, especially under the multiplier bootstrap setting. This work provides the first uncertainty quantification tool for CMDS endowed with rigorous statistical guarantees.
📝 Abstract
We develop a formal statistical framework for classical multidimensional scaling (CMDS) applied to noisy dissimilarity data. We establish distributional convergence results for the embeddings produced by CMDS for various noise models, which enable the construction of emph{bona~fide} uniform confidence sets for the latent configuration, up to rigid transformations. We further propose bootstrap procedures for constructing these confidence sets and provide theoretical guarantees for their validity. We find that the multiplier bootstrap adapts automatically to heteroscedastic noise such as multiplicative noise, while the empirical bootstrap seems to require homoscedasticity. Either form of bootstrap, when valid, is shown to substantially improve finite-sample accuracy. The empirical performance of the proposed methods is demonstrated through numerical experiments.