Exploring a Principled Framework for Deep Subspace Clustering

πŸ“… 2025-03-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing deep subspace clustering (DSC) methods rely on the strict union-of-subspaces (UoS) assumption, suffer from feature collapse when representations and self-expressive coefficients are learned jointly, and lack theoretical guarantees. This paper proposes PRO-DSC, the first end-to-end DSC framework that simultaneously provides rigorous theoretical guarantees and structured modeling. Its core contribution is the first theoretical proof that the regularized self-expressive model prevents feature collapse and that the optimal representations lie exactly on a union of orthogonal subspaces. PRO-DSC integrates structured representation regularization, differentiable self-expression modeling, and an efficient optimization algorithm. Extensive experiments on multiple benchmark datasets demonstrate significant improvements over state-of-the-art methods, and the empirical results consistently validate the theoretical claims, confirming both the efficacy and the soundness of the proposed framework. The implementation is publicly available.
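To make the ingredients concrete, the block below sketches the generic shape of a regularized self-expressive objective of the kind described above: a self-expression residual on the learned features, a penalty on the coefficient matrix, and a representation regularizer that rules out collapse. The notation ($f_\theta$, $Z$, $C$, $\gamma$, $\lambda$, $R$) is a placeholder sketch, not PRO-DSC's exact formulation; in particular, the Frobenius penalty on $C$ is shown only for illustration.

```latex
% Generic regularized self-expressive objective (placeholder notation; see the
% paper for PRO-DSC's exact loss, regularizers, and constraints).
%   X: data,  Z = f_theta(X): learned representations,  C: self-expressive coefficients,
%   R(Z): a regularizer chosen so that minimizing it discourages collapsed
%         (rank-deficient) representations.
\[
  \min_{\theta,\;C}\;\;
  \tfrac{1}{2}\,\lVert Z - ZC \rVert_F^2
  \;+\; \gamma\,\lVert C \rVert_F^2
  \;+\; \lambda\, R(Z),
  \qquad Z = f_\theta(X).
\]
```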

πŸ“ Abstract
Subspace clustering is a classical unsupervised learning task, built on the basic assumption that high-dimensional data can be approximated by a union of subspaces (UoS). Nevertheless, real-world data often deviate from the UoS assumption. To address this challenge, state-of-the-art deep subspace clustering algorithms attempt to jointly learn UoS representations and self-expressive coefficients. However, the general framework of the existing algorithms suffers from catastrophic feature collapse and lacks a theoretical guarantee of learning the desired UoS representation. In this paper, we present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model, prove that the regularized self-expressive model is able to prevent feature space collapse, and demonstrate that the learned optimal representations under certain conditions lie on a union of orthogonal subspaces. Moreover, we provide a scalable and efficient approach to implement PRO-DSC and conduct extensive experiments to verify our theoretical findings and demonstrate the superior performance of the proposed deep subspace clustering approach. The code is available at https://github.com/mengxianghan123/PRO-DSC.
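For readers unfamiliar with self-expressive models, the sketch below shows the conventional final step of this family of methods: turn the learned coefficient matrix C into a symmetric affinity and run spectral clustering on it. This is the standard pipeline, assumed here for illustration; whether PRO-DSC uses exactly this post-processing is specified in the paper, and the helper name `cluster_from_coefficients` is hypothetical.

```python
# Minimal sketch of the standard self-expressive clustering step (illustrative, not PRO-DSC's exact code).
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(C: np.ndarray, n_clusters: int) -> np.ndarray:
    """Spectral clustering on the affinity induced by self-expressive coefficients C (n x n)."""
    A = np.abs(C) + np.abs(C).T          # symmetrize: A_ij reflects how strongly i and j express each other
    np.fill_diagonal(A, 0.0)             # a sample should not vote for itself
    labels = SpectralClustering(
        n_clusters=n_clusters,
        affinity="precomputed",          # A is already an affinity matrix
        assign_labels="kmeans",
        random_state=0,
    ).fit_predict(A)
    return labels

if __name__ == "__main__":
    # Toy usage with random placeholder coefficients, only to show the interface.
    rng = np.random.default_rng(0)
    C = 0.05 * rng.standard_normal((60, 60))
    print(cluster_from_coefficients(C, n_clusters=3).shape)  # (60,)
```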
Problem

Research questions and friction points this paper is trying to address.

Addressing the deviation of real-world data from the UoS assumption
Preventing feature collapse in deep subspace clustering
Learning structured representations with theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for deep subspace clustering
Regularization prevents feature space collapse
Learned representations lie on a union of orthogonal subspaces (see the diagnostic sketch after this list)
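As a concrete way to probe the last claim, here is a minimal diagnostic sketch, external to the paper: given learned representations and cluster labels, estimate a low-rank basis per cluster via SVD and compute the principal angles between cluster subspaces; angles near 90 degrees support the union-of-orthogonal-subspaces interpretation. The function name `min_pairwise_angle_deg` and the fixed `rank` are illustrative assumptions.

```python
# Hedged diagnostic sketch: check near-orthogonality of per-cluster subspaces.
import numpy as np
from scipy.linalg import subspace_angles

def min_pairwise_angle_deg(Z: np.ndarray, labels: np.ndarray, rank: int = 5) -> float:
    """Smallest principal angle (degrees) over all cluster pairs; values near 90 indicate orthogonal subspaces.

    Z: (n, d) learned representations; labels: (n,) cluster assignments.
    rank is an illustrative choice and must not exceed the per-cluster sample count or d.
    """
    bases = []
    for k in np.unique(labels):
        Zk = Z[labels == k]                           # samples of cluster k, shape (n_k, d)
        _, _, Vt = np.linalg.svd(Zk, full_matrices=False)
        bases.append(Vt[:rank].T)                     # (d, rank) orthonormal basis of the cluster subspace
    worst = 90.0
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            angles = np.degrees(subspace_angles(bases[i], bases[j]))
            worst = min(worst, float(angles.min()))   # smallest angle = closest alignment between the two subspaces
    return worst
```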
Authors
Xianghan Meng
Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
Zhiyuan Huang
Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
Wei He
Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China
Xianbiao Qi
Shenzhen Intellifusion Technologies Co., Ltd.
Neural Network Optimization, Generative Models, Large-Scale Pretrain Models, OCR
Rong Xiao
Intellifusion, Shenzhen, P.R. China
Chun-Guang Li
Associate Professor, Beijing University of Posts and Telecommunications
Subspace Clustering, Self-Supervised Learning, Time Series Modeling, Biomedical Engineering