Transformers as Unsupervised Learning Algorithms: A study on Gaussian Mixtures

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the fundamental unsupervised learning capabilities of Transformers, presenting the first systematic analysis, through the lens of statistical estimation, of their ability to solve Gaussian Mixture Models (GMMs). We propose TGMM, an end-to-end differentiable framework that jointly optimizes GMM parameter estimation and latent-variable inference using a shared Transformer backbone. Theoretically, we prove that Transformers can uniformly approximate both the Expectation-Maximization (EM) algorithm and the cubic tensor power iteration at the core of spectral methods. Empirically, TGMM significantly outperforms classical approaches: it is robust to initialization and to distribution shift, avoids poor local optima, and generalizes to unseen numbers of mixture components and covariance structures. This work establishes the first paradigm for large-model-based unsupervised learning that pairs rigorous theoretical guarantees with strong empirical performance.

📝 Abstract
The transformer architecture has demonstrated remarkable capabilities in modern artificial intelligence, among which the capability of implicitly learning an internal model during inference time is widely believed to play a key role in the understanding of pre-trained large language models. However, most recent work has focused on supervised learning topics such as in-context learning, leaving the field of unsupervised learning largely unexplored. This paper investigates the capabilities of transformers in solving Gaussian Mixture Models (GMMs), a fundamental unsupervised learning problem, through the lens of statistical estimation. We propose a transformer-based learning framework called TGMM that simultaneously learns to solve multiple GMM tasks using a shared transformer backbone. The learned models are empirically demonstrated to effectively mitigate the limitations of classical methods such as Expectation-Maximization (EM) or spectral algorithms, while exhibiting reasonable robustness to distribution shifts. Theoretically, we prove that transformers can approximate both the EM algorithm and a core component of spectral methods (cubic tensor power iterations). These results bridge the gap between practical success and theoretical understanding, positioning transformers as versatile tools for unsupervised learning.
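For context, the EM baseline the abstract refers to alternates a soft cluster-assignment step (E-step) with parameter re-estimation (M-step). A minimal numpy sketch for the isotropic-covariance case, purely illustrative (`em_gmm` is our name, not the paper's TGMM code):

```python
# Minimal EM for an isotropic Gaussian mixture -- the classical baseline the
# paper compares TGMM against. Illustrative sketch only, not the paper's code.
import numpy as np

def em_gmm(X, K, n_iter=50, mu0=None, seed=0):
    """Fit a K-component GMM with isotropic covariances sigma2_k * I."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)] if mu0 is None else mu0.astype(float).copy()
    sigma2 = np.full(K, X.var())          # per-component isotropic variance
    pi = np.full(K, 1.0 / K)              # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, sigma2_k I)
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, K) squared distances
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * sigma2) - sq / (2 * sigma2)
        logp -= logp.max(axis=1, keepdims=True)                   # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities
        Nk = r.sum(0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        sigma2 = (r * sq).sum(0) / (d * Nk)
    return pi, mu, sigma2

# Two well-separated clusters; EM recovers means near (-5,-5) and (5,5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-5, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
pi, mu, sigma2 = em_gmm(X, 2, mu0=np.array([[-1.0, -1.0], [1.0, 1.0]]))
```

With a poor initialization (e.g. both initial means drawn from the same cluster), this update can stall in a bad local optimum; that sensitivity is precisely the EM limitation the paper reports TGMM mitigating.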
Problem

Research questions and friction points this paper is trying to address.

Investigating transformers' capabilities in solving Gaussian Mixture Models (GMMs)
Proposing a transformer-based framework (TGMM) for unsupervised GMM learning
Bridging theoretical and practical gaps in transformers for unsupervised tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers solve Gaussian Mixture Models unsupervised
TGMM framework learns multiple GMM tasks jointly
Theoretical proof: transformers approximate EM and spectral methods
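The spectral route mentioned alongside EM estimates mixture components from third-order moment tensors, and its core step is the cubic power iteration v ← T(I, v, v)/‖T(I, v, v)‖, the operation the paper proves transformers can approximate. A small numpy sketch of that step alone (the moment-tensor construction from GMM data is omitted; this is illustrative, not the paper's code):

```python
# Cubic tensor power iteration -- the core spectral-method step the paper
# proves transformers can approximate. Illustrative sketch, not the paper's code.
import numpy as np

def tensor_power_iteration(T, n_iter=50, seed=0):
    """Find an eigenpair of a symmetric 3rd-order tensor T via v <- T(I, v, v)."""
    d = T.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, v, v)    # cubic update T(I, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Rank-2 symmetric tensor with orthonormal components e1 (weight 2) and e2
# (weight 1): the iteration converges to one of the two components.
a, b = np.eye(3)[0], np.eye(3)[1]
T = 2 * np.einsum('i,j,k->ijk', a, a, a) + np.einsum('i,j,k->ijk', b, b, b)
lam, v = tensor_power_iteration(T)
```

Unlike matrix power iteration, which basin of attraction the cubic update falls into depends on the random start, which is why spectral pipelines typically rerun it with several initializations.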
Zhiheng Chen
University of California, Irvine
Ruofan Wu
Department of Statistics and Data Science, School of Management, Fudan University
Guanhua Fang
Assistant Professor, Fudan University
Statistics, Machine learning