🤖 AI Summary
Traditional data depth measures face computational and theoretical bottlenecks in high-dimensional settings. To address this, we propose *loss depth*, a novel framework that defines data depth as the minimum empirical risk over a family of classifiers under a given label—replacing quantile-based mechanisms with loss functions (e.g., hinge or logistic loss). This formulation unifies classical models such as SVM and logistic regression, and—crucially—reinterprets halfspace depth from a statistical risk perspective for the first time. Our framework establishes a rigorous theoretical link between classifier complexity and the geometric structure of data. Loss depth inherits the computational efficiency of its base classifiers, ensuring scalability and statistical consistency. Empirical evaluation demonstrates its superior performance in high-dimensional anomaly detection compared to existing depth methods. The approach thus offers a principled, computationally efficient, and interpretable alternative grounded in statistical learning theory.
📝 Abstract
Data depths are score functions that quantify, in an unsupervised fashion, how central a point is within a distribution, with numerous applications, such as anomaly detection and multivariate or functional data analysis, arising across various fields. The halfspace depth was the first depth to aim at generalising the notion of quantile beyond the univariate case. Among the existing variety of depth definitions, it remains one of the most widely used notions of data depth. Taking a different angle from the quantile point of view, we show that the halfspace depth can also be regarded as the minimum loss of a set of classifiers for a specific labelling of the points. By changing the loss or the set of classifiers considered, this new angle naturally leads to a family of "loss depths", extending to well-studied classifiers such as SVM or logistic regression, among others. This framework directly inherits the computational efficiency of existing machine learning algorithms as well as their fast statistical convergence rates, and opens the data depth realm to the high-dimensional setting. Furthermore, the new loss depths highlight a connection between the dataset and the right amount of complexity or simplicity of the classifiers. The simplicity of the classifiers, together with the interpretation as a risk, makes our new kind of data depth easy to explain, yet efficient for anomaly detection, as our experiments show.
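To make the classification view concrete, here is a minimal sketch (not the paper's implementation) of the idea the abstract describes: the empirical halfspace depth of a query point is the minimum, over linear classifiers through that point, of the fraction of sample points misclassified when the query is labelled against the sample (a 0-1 loss); swapping the 0-1 loss for a surrogate such as the logistic loss yields one member of the hypothetical "loss depth" family. The direction-sampling approximation and the function names are illustrative assumptions.

```python
import numpy as np

def halfspace_depth(q, X, n_dirs=2000, seed=0):
    """Approximate halfspace (Tukey) depth of q w.r.t. sample X:
    the minimum, over random directions u, of the fraction of sample
    points in the closed halfspace {x : <x - q, u> >= 0}."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = (X - q) @ U.T                      # shape (n_points, n_dirs)
    # Per-direction fraction of points on the nonnegative side, minimised.
    return (proj >= 0).mean(axis=0).min()

def loss_depth_logistic(q, X, n_dirs=2000, seed=0):
    """Illustrative 'loss depth' variant: over the same family of linear
    classifiers through q, take the minimum average logistic loss of
    classifying the sample points as the positive class (replacing the
    0-1 loss that recovers halfspace depth)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = (X - q) @ U.T
    risks = np.log1p(np.exp(-proj)).mean(axis=0)  # logistic loss per direction
    return risks.min()
```

A point near the centre of the cloud should receive a higher score than a far-away outlier under either definition; this monotone behaviour, rather than the exact values, is what the sketch is meant to convey.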