Clustering High-dimensional Data: Balancing Abstraction and Representation (Tutorial at AAAI 2026)

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clustering high-dimensional data inherently involves a trade-off between abstraction—discarding redundant information—and representation—preserving discriminative structures. This work systematically examines the design principles and limitations of prevalent approaches, including K-means, subspace clustering, and deep clustering, and proposes a framework that explicitly decouples the latent space to separate clustering-relevant from clustering-irrelevant information. By doing so, the method enables a controllable balance between abstraction and representation. The proposed framework not only elucidates a key mechanism by which current deep clustering methods avoid degenerating into pure representation learning but also opens new avenues for developing more efficient and interpretable adaptive clustering algorithms.

📝 Abstract
How do we find a natural grouping of a large real data set? Clustering requires a balance between abstraction and representation. To identify clusters, we need to abstract from superfluous details of individual objects. But we also need a rich representation that emphasizes the key features distinguishing one group of objects from another. Each clustering algorithm implements a different trade-off between abstraction and representation. Classical K-means implements a high level of abstraction (details are simply averaged out) combined with a very simple representation (all clusters are Gaussians in the original data space). We will see how approaches to subspace and deep clustering support high-dimensional and complex data by allowing richer representations. However, with increasing representational expressiveness comes the need to explicitly enforce abstraction in the objective function, to ensure that the resulting method performs clustering and not just representation learning. We will see how current deep clustering methods define and enforce abstraction through centroid-based and density-based clustering losses. Balancing the conflicting goals of abstraction and representation is challenging. Ideas from subspace clustering help by learning one latent space for the information that is relevant to clustering and a second latent space that captures all other information in the data. The tutorial ends with an outlook on future research in clustering. Future methods will more adaptively balance abstraction and representation to improve performance, energy efficiency, and interpretability. By automatically finding the sweet spot between abstraction and representation, the human brain is very good at clustering and related tasks such as single-shot learning. So, there is still much room for improvement.
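The abstract's point about K-means (abstraction by averaging, each cluster summarized by a single centroid) can be illustrated with a minimal sketch of Lloyd's algorithm. This is a generic numpy implementation for illustration, not code from the tutorial; the toy data and all function names are assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: high abstraction, simple representation.

    Each cluster is reduced to a single centroid (its mean), so all
    within-cluster detail is averaged out, exactly the trade-off the
    abstract describes for classical K-means.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: details are simply averaged out
        new_centroids = []
        for j in range(k):
            members = X[labels == j]
            # keep the old centroid if a cluster loses all its points
            new_centroids.append(members.mean(axis=0) if len(members) else centroids[j])
        centroids = np.array(new_centroids)
    return labels, centroids

# toy data: two well-separated Gaussian blobs in 2D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(3.0, 0.3, size=(50, 2))])
labels, centroids = kmeans(X, k=2)
```

On data like this, where clusters really are compact Gaussians in the original space, the simple representation suffices; the richer representations discussed next become necessary when the cluster structure is hidden in subspaces or nonlinear manifolds.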
Problem

Research questions and friction points this paper is trying to address.

clustering
high-dimensional data
abstraction
representation
subspace clustering
Innovation

Methods, ideas, or system contributions that make the work stand out.

abstraction-representation trade-off
deep clustering
subspace clustering
latent space disentanglement
clustering loss design
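The latent-space decoupling idea above can be sketched as a composite objective: a reconstruction term computed from the full latent code (representation) plus a centroid-based clustering loss applied only to the clustering-relevant part of the code (abstraction). Everything here is a hypothetical illustration, not the tutorial's actual formulation: the random linear maps stand in for learned encoder/decoder networks, and the names `z_c`, `z_r`, and `lam` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for a learned encoder/decoder: random linear maps
d, d_c, d_r = 8, 2, 3                 # data dim, clustering-relevant dim, residual dim
W_enc = rng.normal(size=(d_c + d_r, d))
W_dec = rng.normal(size=(d, d_c + d_r))

def decoupled_loss(X, centroids, lam=0.5):
    """Reconstruction on the full code, clustering loss on z_c only.

    z_c carries the clustering-relevant information; z_r absorbs all other
    variation, so the centroid loss never has to 'explain' it. lam controls
    the balance between abstraction (clustering) and representation
    (reconstruction).
    """
    Z = X @ W_enc.T                        # full latent code
    z_c = Z[:, :d_c]                       # clustering-relevant subspace
    recon = ((X - Z @ W_dec.T) ** 2).mean()            # representation term
    d2 = ((z_c[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    cluster = d2.min(axis=1).mean()                     # abstraction term
    return recon + lam * cluster

X = rng.normal(size=(16, d))
centroids = rng.normal(size=(3, d_c))      # centroids live in the z_c space only
loss = decoupled_loss(X, centroids)
```

In a real deep clustering method both terms would be minimized jointly by gradient descent over the network weights and centroids; the point of the sketch is only how the split latent code makes the abstraction/representation balance an explicit, controllable knob.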