🤖 AI Summary
Tuning the hidden-layer dimension of autoencoders typically relies on computationally expensive grid search. This paper proposes the Self-Organizing Sparse Autoencoder (SOSAE), which employs a physics-inspired structured sparsity regularizer that jointly constrains both the magnitude and the positional index of latent features, enabling dynamic, lossless truncation of the feature space during training. Its core innovations are a position-aware penalty and a self-organizing regularizer, which let the network converge to the optimal hidden dimension during ordinary parameter optimization. Experiments demonstrate that SOSAE reduces the floating-point operations required for dimension tuning by up to 130x without compromising reconstruction accuracy or downstream task performance, significantly accelerating hyper-parameter search.
📝 Abstract
Tuning the size of an autoencoder's hidden layer yields optimally compressed representations of the input data. However, this hyper-parameter search is costly in computation and time when grid search is the default option. In this paper, we introduce a self-organization regularization for autoencoders that dynamically adapts the dimensionality of the feature space to its optimal size. Inspired by concepts from physics, the Self-Organizing Sparse AutoEncoder (SOSAE) induces sparsity in the feature space in a structured way that permits truncating the non-active part of the feature vector without any loss of information. This is achieved by penalizing the autoencoder based on both the magnitude and the positional index of each feature dimension, which constricts the feature space along both axes during training. Extensive experiments on various datasets show that SOSAE tunes the feature-space dimensionality using up to 130 times fewer floating-point operations (FLOPs) than baseline methods while maintaining the same tuning quality and performance.
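To make the idea concrete, here is a minimal sketch of what a magnitude-and-position penalty and the resulting lossless truncation could look like. The exact functional form used by SOSAE is not given in the abstract; the linear position weighting, the `lam` coefficient, and both helper function names below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def self_organizing_penalty(z, lam=0.01):
    """Hypothetical position-weighted sparsity penalty: dimension i of the
    latent code z is charged lam * (i + 1) * |z_i|, so later dimensions are
    more expensive and are pushed toward zero first. This concentrates the
    active features at the front of the vector."""
    idx = np.arange(1, z.shape[-1] + 1)          # positional index 1..D
    return lam * np.sum(idx * np.abs(z), axis=-1)

def truncate_inactive(z, tol=1e-6):
    """Drop the trailing dimensions whose magnitude stays below tol across
    the whole batch: because activity is packed at the front, removing the
    inactive tail loses no information."""
    active = np.abs(z).max(axis=0) > tol
    last = np.max(np.nonzero(active)[0]) + 1 if active.any() else 0
    return z[:, :last]

# Toy batch: two active dimensions followed by a dead tail.
z = np.array([[0.5, 0.2, 0.0, 0.0],
              [0.3, 0.1, 0.0, 0.0]])
print(self_organizing_penalty(z, lam=1.0))  # per-sample penalties
print(truncate_inactive(z).shape)           # tail is dropped
```

In training, the penalty would be added to the reconstruction loss; after convergence the truncated width serves as the tuned hidden dimension, replacing an outer grid-search loop.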