On the Use of Bagging for Local Intrinsic Dimensionality Estimation

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high variance of local intrinsic dimensionality (LID) estimates caused by data sparsity in small neighborhoods, which undermines estimation accuracy. To mitigate this, the study introduces subbagging (bootstrap aggregation over subsamples drawn without replacement) into LID estimation, constructing an ensemble estimator that substantially reduces variance while preserving the local distribution of nearest-neighbor distances. A systematic analysis of the interplay among the sampling rate, the neighborhood size k, and the ensemble size, combined with a neighborhood smoothing strategy, further improves estimation precision and guides hyper-parameter selection. Experiments show that the method achieves significantly lower mean squared error across a broad range of hyper-parameters while allowing controlled bias adjustment, consistently outperforming the corresponding non-bagged baselines.

📝 Abstract
The theory of Local Intrinsic Dimensionality (LID) has become a valuable tool for characterizing local complexity within and across data manifolds, supporting a range of data mining and machine learning tasks. Accurate LID estimation requires samples drawn from small neighborhoods around each query to avoid biases from nonlocal effects and potential manifold mixing, yet limited data within such neighborhoods tends to cause high estimation variance. As a variance reduction strategy, we propose an ensemble approach that uses subbagging to preserve the local distribution of nearest neighbor (NN) distances. The main challenge is that the uniform reduction in total sample size within each subsample increases the proximity threshold for finding a fixed number k of NNs around the query. As a result, in the specific context of LID estimation, the sampling rate has an additional, complex interplay with the neighborhood size, where both combined determine the sample size as well as the locality and resolution considered for estimation. We analyze both theoretically and experimentally how the choice of the sampling rate and the k-NN size used for LID estimation, alongside the ensemble size, affects performance, enabling informed prior selection of these hyper-parameters depending on application-based preferences. Our results indicate that within broad and well-characterized regions of the hyper-parameter space, using a bagged estimator will most often significantly reduce variance as well as the mean squared error when compared to the corresponding non-bagged baseline, with controllable impact on bias. We additionally propose and evaluate different ways of combining bagging with neighborhood smoothing for substantial further improvements on LID estimation performance.
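To make the ensemble idea concrete, the following is a minimal sketch of subbagging applied to a standard maximum-likelihood (Hill-type) LID estimator: each base estimate is computed from the k-NN distances of the query within a subsample drawn without replacement, and the estimates are averaged. This is an illustrative reconstruction, not the paper's exact algorithm; the function names, the choice of the MLE base estimator, and the default hyper-parameter values are assumptions.

```python
import numpy as np

def mle_lid(dists, k):
    """Maximum-likelihood (Hill-type) LID estimate from the k smallest
    query-to-sample distances: (k-1) / sum_i log(r_k / r_i)."""
    r = np.sort(dists)[:k]
    r_k = r[-1]
    return (k - 1) / np.sum(np.log(r_k / r[:-1]))

def subbagged_lid(data, query, k=20, rate=0.5, n_estimators=50, seed=None):
    """Subbagging sketch: average MLE LID estimates over subsamples of
    size rate * n drawn without replacement. Reducing the subsample size
    enlarges the k-NN radius, so `rate` and `k` jointly set the locality."""
    rng = np.random.default_rng(seed)
    n = len(data)
    s = max(k + 1, int(rate * n))  # each subsample must contain k neighbors
    estimates = []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=s, replace=False)
        d = np.linalg.norm(data[idx] - query, axis=1)
        d = d[d > 0]  # drop the query itself if it was sampled
        estimates.append(mle_lid(d, k))
    return float(np.mean(estimates))
```

For data drawn from a 2-dimensional Gaussian, the estimate at the origin should concentrate near the true intrinsic dimension of 2, with the averaging over subsamples reducing the variance relative to a single MLE estimate.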
Problem

Research questions and friction points this paper is trying to address.

Local Intrinsic Dimensionality
variance reduction
bagging
nearest neighbor distances
hyper-parameter selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local Intrinsic Dimensionality
Bagging
Subbagging
k-NN
Variance Reduction