TabClustPFN: A Prior-Fitted Network for Tabular Data Clustering

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of clustering tabular data—namely, heterogeneous feature types, diverse generative mechanisms, and the absence of transferable inductive biases—by extending Prior-Fitted Networks (PFNs) to unsupervised clustering for the first time. The proposed method leverages pretraining on synthetic data to enable end-to-end inference of both cluster assignments and the number of clusters, without requiring fine-tuning or hyperparameter tuning. It natively supports mixed numerical and categorical features and integrates amortized Bayesian inference with flexible clustering priors. Extensive experiments show that the approach significantly outperforms conventional, deep learning-based, and amortized clustering methods across a variety of synthetic and real-world tabular datasets, with strong out-of-the-box robustness.
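The prior-fitting step described above can be illustrated with a toy sampler. The sketch below draws synthetic clustering tasks from a simple Gaussian-mixture prior with a random number of clusters; this is only an illustrative stand-in for the paper's "flexible clustering prior" (the actual prior, including categorical-feature generation, is specified in the paper), and `sample_prior_dataset` is a hypothetical helper name.

```python
import numpy as np

def sample_prior_dataset(rng, n_points=200, d=5, max_k=6):
    """Draw one synthetic clustering task from a simple Gaussian-mixture
    prior. Illustrative only: a PFN-style model would be pretrained on
    many such (X, labels, k) tasks so that, at inference time, a single
    forward pass predicts both assignments and the number of clusters."""
    k = int(rng.integers(2, max_k + 1))          # latent cluster count
    means = rng.normal(scale=3.0, size=(k, d))   # cluster centers
    labels = rng.integers(0, k, size=n_points)   # ground-truth assignments
    X = means[labels] + rng.normal(size=(n_points, d))  # noisy points
    return X, labels, k

rng = np.random.default_rng(0)
X, y, k = sample_prior_dataset(rng)
```

During pretraining, the network sees the features `X` and is trained to recover `labels` and `k`, amortizing Bayesian inference under the prior so that no per-dataset tuning is needed at test time.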

📝 Abstract
Clustering tabular data is a fundamental yet challenging problem due to heterogeneous feature types, diverse data-generating mechanisms, and the absence of transferable inductive biases across datasets. Prior-fitted networks (PFNs) have recently demonstrated strong generalization in supervised tabular learning by amortizing Bayesian inference under a broad synthetic prior. Extending this paradigm to clustering is nontrivial: clustering is unsupervised, admits a combinatorial and permutation-invariant output space, and requires inferring the number of clusters. We introduce TabClustPFN, a prior-fitted network for tabular data clustering that performs amortized Bayesian inference over both cluster assignments and cluster cardinality. Pretrained on synthetic datasets drawn from a flexible clustering prior, TabClustPFN clusters unseen datasets in a single forward pass, without dataset-specific retraining or hyperparameter tuning. The model naturally handles heterogeneous numerical and categorical features and adapts to a wide range of clustering structures. Experiments on synthetic data and curated real-world tabular benchmarks show that TabClustPFN outperforms classical, deep, and amortized clustering baselines, while exhibiting strong robustness in out-of-the-box exploratory settings. Code is available at https://github.com/Tianqi-Zhao/TabClustPFN.
Problem

Research questions and friction points this paper is trying to address.

tabular data clustering
heterogeneous features
unsupervised learning
cluster cardinality inference
inductive bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

prior-fitted network
tabular clustering
amortized Bayesian inference
unsupervised learning
cluster cardinality estimation