SOSAE: Self-Organizing Sparse AutoEncoder

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tuning the hidden-layer dimension of autoencoders typically relies on computationally expensive grid search. This paper proposes the Self-Organizing Sparse Autoencoder (SOSAE), which employs physics-inspired structured sparsity regularization to jointly constrain both the magnitude and the positional index of latent features, enabling dynamic, lossless pruning of the feature space during training. Its core innovations are a position-aware gradient suppression mechanism and a self-organizing regularizer, which allow the network to converge to the optimal hidden dimension during parameter optimization. Experiments demonstrate that SOSAE reduces floating-point operations in the dimension-tuning phase by up to 130×, without compromising reconstruction accuracy or downstream task performance, thereby significantly accelerating hyperparameter search.

📝 Abstract
Tuning the size of an autoencoder's hidden layers yields optimally compressed representations of the input data. However, this hyper-parameter tuning is computationally expensive and time-consuming when grid search is the default option. In this paper, we introduce Self-Organization Regularization for Autoencoders, which dynamically adapts the dimensionality of the feature space to its optimal size. Inspired by concepts from physics, the Self-Organizing Sparse AutoEncoder (SOSAE) induces sparsity in the feature space in a structured way that permits truncation of the non-active part of the feature vector without any loss of information. This is done by penalizing the autoencoder based on both the magnitude and the positional index of each feature-vector dimension, which constricts the feature space in both respects during training. Extensive experiments on various datasets show that SOSAE can tune the feature-space dimensionality with up to 130 times fewer floating-point operations (FLOPs) than other baselines while maintaining the same quality of tuning and performance.
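The abstract describes penalizing each feature dimension by both its magnitude and its positional index, so that activity concentrates in the leading dimensions and the inactive tail can be truncated. A minimal sketch of that idea, assuming an exponential positional weight `(1 + alpha)**i` (an illustrative choice, not the paper's exact regularizer; `lam`, `alpha`, and `tol` are hypothetical parameters):

```python
import numpy as np

def positional_sparsity_penalty(z, lam=0.01, alpha=0.1):
    """Position-weighted L1 penalty: later feature dimensions pay an
    exponentially larger cost, so the optimizer pushes activations
    toward the leading dimensions. The (1 + alpha)**i weight is an
    assumed form for illustration only."""
    z = np.asarray(z, dtype=float)
    weights = (1.0 + alpha) ** np.arange(z.shape[-1])
    return lam * np.sum(weights * np.abs(z), axis=-1)

def truncate_inactive(z, tol=1e-3):
    """Drop trailing dimensions whose activations fall below tol.
    Because the penalty drives sparsity toward the tail, truncating
    there discards (near-)zero features rather than information."""
    z = np.asarray(z, dtype=float)
    active = np.nonzero(np.abs(z) > tol)[0]
    keep = active[-1] + 1 if active.size else 0
    return z[:keep]
```

For example, a latent vector `[0.9, 0.4, 0.0, 0.0]` truncates to two dimensions, and placing the same activation in a later slot incurs a strictly larger penalty, which is what makes the sparsity "structured" rather than scattered.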
Problem

Research questions and friction points this paper is trying to address.

Dynamically optimizes autoencoder hidden layer size
Reduces computation effort for hyper-parameter tuning
Maintains performance while minimizing feature space FLOPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Organization Regularization dynamically adapts feature space
Structured sparsity enables truncation without information loss
Reduces FLOPs by 130x while maintaining performance
Sarthak Ketanbhai Modi
Nanyang Technological University
Zi Pong Lim
Continental Automotive Singapore
Yushi Cao
Nanyang Technological University
Deep Reinforcement Learning · Trustworthy AI
Yupeng Cheng
Nanyang Technological University
Yon Shin Teo
Continental Automotive Singapore
Shang-Wei Lin
Nanyang Technological University