🤖 AI Summary
This study addresses the lack of rigorous statistical assessment for the reliability of output structures in complex clustering pipelines that involve multiple data-dependent stages such as anomaly detection, feature selection, and clustering. To bridge this gap, the work systematically applies selective inference to the entire clustering analysis workflow, establishing a statistical framework that enables valid significance testing of final cluster assignments. The proposed method rigorously controls the type I error rate at any pre-specified nominal level and demonstrates strong empirical performance on both synthetic and real-world datasets. By doing so, it provides a principled and reliable foundation for statistical inference in multi-stage, data-driven clustering procedures.
📝 Abstract
A data analysis pipeline is a structured sequence of steps that transforms raw data into meaningful insights by integrating multiple analysis algorithms. In many practical applications, analytical findings are obtained only after data pass through several data-dependent procedures within such pipelines. In this study, we address the problem of quantifying the statistical reliability of results produced by data analysis pipelines. As a proof of concept, we focus on clustering pipelines that identify cluster structures from complex and heterogeneous data through procedures such as outlier detection, feature selection, and clustering. We propose a novel statistical testing framework to assess the significance of clustering results obtained through these pipelines. Our framework, based on selective inference, enables the systematic construction of valid statistical tests for clustering pipelines composed of predefined components. We prove that the proposed test controls the type I error rate at any nominal level and demonstrate its validity and effectiveness through experiments on synthetic and real datasets.
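To see why naive inference after a data-dependent pipeline is invalid (the problem selective inference is designed to fix), consider a minimal illustrative sketch, not the paper's method: data drawn from a single Gaussian with no true cluster structure are split into two "clusters" at a data-derived cut point, and a naive two-sample t statistic is then computed as if the groups had been fixed in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# All observations come from ONE Gaussian: there is no true cluster structure.
x = rng.normal(loc=0.0, scale=1.0, size=100)

# Data-dependent "clustering" step: split the sample at its own median.
cut = np.median(x)
a, b = x[x <= cut], x[x > cut]

# Naive Welch t statistic, ignoring that the split was chosen from the data.
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
t = (b.mean() - a.mean()) / se
print(t)  # very large, so the naive p-value is near zero despite no real clusters
```

Because the grouping was selected from the same data being tested, the naive test rejects the null almost always, i.e. the type I error rate is far above the nominal level. A selective inference test of the kind proposed in the paper instead conditions on the selection event (here, the outcome of the clustering step), restoring valid p-values.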