🤖 AI Summary
To address the lack of a general explanation mechanism for non-interpretable clustering results, this paper proposes a generic, interpretable clustering framework based on spectral graph partitioning. The method is agnostic to the specific clustering objective: it automatically fits an axis-aligned decision tree to any black-box clustering output, or directly to the raw dataset, yielding structured, human-readable cluster representations. A key novelty is the first application of spectral graph partitioning to interpretable clustering, which yields a unified graph-optimization formulation; interpretability guarantees are established within Trevisan's generalized framework, in which cuts are optimized in two graphs simultaneously. Several existing algorithms are also unified under this graph-partitioning perspective. Experiments on multiple benchmark datasets show that the proposed method outperforms mainstream baselines, achieving a superior trade-off between clustering quality and interpretability.
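To make the "fit an axis-aligned tree to a black-box clustering" idea concrete, here is a minimal, dependency-free sketch: given points and labels from some opaque 2-way clustering, it searches for the single axis-aligned threshold (a depth-1 tree, i.e. a stump) that best separates the two clusters. This is an illustrative toy, not the paper's spectral-partitioning algorithm; the data, cost (points on the wrong side of the cut), and function name are assumptions for illustration.

```python
# Illustrative sketch only: a one-level axis-aligned "explanation tree" (a stump)
# fit to a given 2-cluster labeling. Not the paper's algorithm.

def best_axis_aligned_split(points, labels):
    """Return (dim, threshold, mistakes): the axis-aligned cut that best
    separates the two given cluster labels."""
    best = None
    for dim in range(len(points[0])):
        vals = sorted(set(p[dim] for p in points))
        for lo, hi in zip(vals, vals[1:]):
            t = (lo + hi) / 2  # candidate threshold between consecutive values
            # mistakes if label 0 goes left and label 1 goes right of the cut
            m01 = sum((p[dim] <= t) != (lab == 0)
                      for p, lab in zip(points, labels))
            # also allow the opposite assignment of sides to labels
            m = min(m01, len(points) - m01)
            if best is None or m < best[2]:
                best = (dim, t, m)
    return best

# Two well-separated blobs along axis 0; labels come from a "black-box" clustering.
points = [(0.0, 1.0), (0.2, 0.8), (0.1, 0.9), (5.0, 1.1), (5.2, 0.7), (4.9, 1.0)]
labels = [0, 0, 0, 1, 1, 1]
dim, t, mistakes = best_axis_aligned_split(points, labels)
print(dim, mistakes)  # → 0 0 (the cut along axis 0 is a perfect explanation)
```

The number of mistakes made by the best such cut, relative to the original clustering, is one simple measure of the "price of explainability" mentioned in the abstract.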
📝 Abstract
Explainable clustering by axis-aligned decision trees was introduced by Moshkovitz et al. (2020) and has gained considerable interest. Prior work has focused on minimizing the price of explainability for specific clustering objectives, lacking a general method to fit an explanation tree to any given clustering, without restrictions. In this work, we propose a new and generic approach to explainable clustering, based on spectral graph partitioning. With it, we design an explainable clustering algorithm that can fit an explanation tree to any given non-explainable clustering, or directly to the dataset itself. Moreover, we show that prior algorithms can also be interpreted as graph partitioning, through a generalized framework due to Trevisan (2013) wherein cuts are optimized in two graphs simultaneously. Our experiments show the favorable performance of our method compared to baselines on a range of datasets.
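The spectral building block the abstract alludes to can be sketched with a standard spectral bipartition: threshold the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian) at zero to split a graph into two well-connected halves. This is the textbook spectral cut, not the paper's two-graph formulation due to Trevisan; the toy adjacency matrix and function name are assumptions for illustration.

```python
# Illustrative sketch: bipartition a graph by the sign of the Fiedler vector.
import numpy as np

def spectral_bipartition(adjacency):
    """Return a 0/1 side assignment for each node via the Fiedler vector."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian D - A
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenpairs in ascending order
    fiedler = eigvecs[:, 1]                 # eigenvector of 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)        # sign pattern gives the cut

# Two triangles joined by a single bridge edge (nodes 0-2 vs. nodes 3-5).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
sides = spectral_bipartition(A)
print(sides)  # nodes 0-2 land on one side, nodes 3-5 on the other
```

In the explainable-clustering setting, a recursive sequence of such cuts, restricted to axis-aligned candidates, is one way to picture how a graph-partitioning objective can drive the construction of an explanation tree.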