🤖 AI Summary
Traditional centrality measures suffer from high computational cost, instability, and poor scalability on high-dimensional and non-Euclidean data. To address this, we propose FUSE, a trainable neural centrality framework. FUSE employs a global-local dual-head architecture: the global head learns ranking-invariant centrality via pairwise distance comparisons, while the local head models smooth log-density potentials through denoising score matching. A single learnable parameter adaptively fuses these heads, enabling, for the first time, an anchor-free unification of ranking learning and density estimation. The framework operates on arbitrary data representations and outputs structure-aware, rank-like centrality scores that capture geometry at multiple scales. Evaluated on synthetic data and real-world benchmarks spanning image, time-series, and text domains, FUSE matches or exceeds classical methods in anomaly detection while offering superior inference efficiency and robustness.
📝 Abstract
Measuring how central or typical a data point is underpins robust estimation, ranking, and outlier detection, but classical depth notions become expensive and unstable in high dimensions and are hard to extend beyond Euclidean data. We introduce Fused Unified centrality Score Estimation (FUSE), a neural centrality framework that operates on top of arbitrary representations. FUSE combines a global head, trained from pairwise distance-based comparisons to learn an anchor-free centrality score, with a local head, trained by denoising score matching to approximate a smoothed log-density potential. A single parameter between 0 and 1 interpolates between these calibrated signals, yielding depth-like centrality from both views in one forward pass. Across synthetic distributions, real image, time-series, and text data, and standard outlier detection benchmarks, FUSE recovers orderings consistent with classical depth measures, reveals multi-scale geometric structure, and attains performance competitive with strong classical baselines while remaining simple and efficient.
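To make the two-head fusion concrete, here is a minimal, non-authoritative sketch of the scoring scheme the abstract describes. It is not the paper's implementation: the trained neural heads are replaced by simple stand-ins (negative mean pairwise distance for the global head, a Gaussian kernel log-density for the local head), the calibration step is omitted, and the fusion parameter is a fixed logit rather than a learned one. The function names `global_head_score`, `local_head_score`, and `fuse` are hypothetical.

```python
import numpy as np

def global_head_score(x, X):
    # Stand-in for the global head: negative mean distance to all
    # points, so more-central points receive higher scores.
    d = np.linalg.norm(X - x, axis=1)
    return -d.mean()

def local_head_score(x, X, sigma=0.5):
    # Stand-in for the local head: a smoothed log-density estimate
    # via a Gaussian kernel (the paper trains this head with
    # denoising score matching instead).
    d2 = ((X - x) ** 2).sum(axis=1)
    return np.log(np.exp(-d2 / (2.0 * sigma ** 2)).mean() + 1e-12)

def fuse(x, X, alpha_logit=0.0):
    # A single parameter gates the two heads; a sigmoid keeps the
    # mixing weight in (0, 1), mirroring the interpolation in FUSE.
    a = 1.0 / (1.0 + np.exp(-alpha_logit))
    return a * global_head_score(x, X) + (1.0 - a) * local_head_score(x, X)

# Usage: a point near the bulk of the data should outrank an outlier.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # cluster centered at the origin
center = np.zeros(2)
outlier = np.array([5.0, 5.0])
print(fuse(center, X) > fuse(outlier, X))  # → True
```

In the actual framework both heads are neural networks evaluated in a single forward pass, so inference cost does not grow with the number of reference points as it does in this brute-force sketch.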