Revisiting Incremental Stochastic Majorization-Minimization Algorithms with Applications to Mixture of Experts

📅 2026-01-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of scaling traditional batch algorithms to high-throughput streaming data and overcomes limitations of existing incremental stochastic methods—such as Expectation-Maximization—that rely on explicit latent variable modeling and thus struggle to generalize to complex architectures like Softmax-gated Mixture-of-Experts (MoE). The authors propose an incremental stochastic Majorization-Minimization (MM) algorithm that relaxes structural assumptions on latent variables, yielding a flexible online optimization framework. By circumventing an explicit E-step and accommodating arbitrary constructible surrogate functions, the method enables the first viable incremental learning scheme for Softmax-gated MoE models. Empirical evaluations on both synthetic and real-world datasets—including maize proteomic and ecophysiological data under drought stress—demonstrate consistently superior predictive performance compared to SGD, RMSProp, Adam, and second-order clipped stochastic optimizers.
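To make the template in the summary concrete, below is a minimal sketch (our own illustration, not the paper's implementation) of the incremental stochastic MM loop on the one case where the surrogate is classical: online EM for a two-component Gaussian mixture, which the paper treats as a special case. Each step majorizes the per-sample negative log-likelihood via Jensen's inequality, folds the surrogate's sufficient statistics into a running average by stochastic approximation, and minimizes the averaged surrogate in closed form. The step-size schedule `(n + 1) ** -0.6`, the variance floor, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data stream: two-component 1-D Gaussian mixture (ground truth).
true_pi = np.array([0.4, 0.6])
true_mu = np.array([-2.0, 3.0])
true_sd = np.array([1.0, 0.5])

def sample():
    k = int(rng.random() < true_pi[1])
    return rng.normal(true_mu[k], true_sd[k])

# Initial parameters and running surrogate statistics s_bar[k] = (w, w*x, w*x^2).
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
s_bar = np.stack([pi, pi * mu, pi * (var + mu**2)], axis=1)

for n in range(1, 50_001):
    x = sample()
    # Majorization: Jensen's inequality with responsibilities r yields a
    # surrogate of the per-sample negative log-likelihood (EM's E-step).
    dens = pi * np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum()
    s = np.stack([r, r * x, r * x**2], axis=1)
    # Stochastic-approximation averaging of the surrogate statistics.
    gamma = (n + 1) ** -0.6
    s_bar = (1 - gamma) * s_bar + gamma * s
    # Minimization: closed-form M-step on the averaged surrogate.
    pi = s_bar[:, 0] / s_bar[:, 0].sum()
    mu = s_bar[:, 1] / s_bar[:, 0]
    var = np.maximum(s_bar[:, 2] / s_bar[:, 0] - mu**2, 1e-8)  # defensive floor

print("pi:", pi.round(3), "mu:", mu.round(3), "sd:", np.sqrt(var).round(3))
```

The paper's point, per the summary, is that the same average-then-minimize loop runs with any constructible majorizer, which is what removes the need for an explicit E-step in models such as softmax-gated MoE.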

📝 Abstract
Processing high-volume, streaming data is increasingly common in modern statistics and machine learning, where batch-mode algorithms are often impractical because they require repeated passes over the full dataset. This has motivated incremental stochastic estimation methods, including the incremental stochastic Expectation-Maximization (EM) algorithm formulated via stochastic approximation. In this work, we revisit and analyze an incremental stochastic variant of the Majorization-Minimization (MM) algorithm, which generalizes incremental stochastic EM as a special case. Our approach relaxes key EM requirements, such as explicit latent-variable representations, enabling broader applicability and greater algorithmic flexibility. We establish theoretical guarantees for the incremental stochastic MM algorithm, proving consistency in the sense that the iterates converge to a stationary point characterized by a vanishing gradient of the objective. We demonstrate these advantages on a softmax-gated mixture of experts (MoE) regression problem, for which no stochastic EM algorithm is available. Empirically, our method consistently outperforms widely used stochastic optimizers, including stochastic gradient descent, root mean square propagation, adaptive moment estimation, and second-order clipped stochastic optimization. These results support the development of new incremental stochastic algorithms, given the central role of softmax-gated MoE architectures in contemporary deep neural networks for heterogeneous data modeling. Beyond synthetic experiments, we also validate practical effectiveness on two real-world datasets, including a bioinformatics study of dent maize genotypes under drought stress that integrates high-dimensional proteomics with ecophysiological traits, where incremental stochastic MM yields stable gains in predictive performance.
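For reference, a softmax-gated MoE regression model of the kind the abstract targets combines a softmax gating network over the covariates with one regression expert per component. Below is a hedged sketch of its conditional density in our own notation (W for gating weights, B for expert coefficients, sigma2 for expert noise variances; shapes and values are placeholders, not the paper's):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def moe_density(y, x, W, B, sigma2):
    """p(y | x) = sum_k softmax(W @ x)_k * N(y; B[k] @ x, sigma2[k])."""
    gates = softmax(W @ x)   # gating probabilities over experts
    means = B @ x            # one linear expert per component
    comp = np.exp(-0.5 * (y - means) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    return float(gates @ comp)

# Toy usage: 3 experts, 4 covariates (all values hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # gating-network weights
B = rng.normal(size=(3, 4))      # expert regression coefficients
sigma2 = np.ones(3)              # expert noise variances
x, y = rng.normal(size=4), 0.5
print(moe_density(y, x, W, B, sigma2))
```

As the abstract notes, no stochastic EM algorithm is available for this model, which is what motivates the relaxed surrogate construction of incremental stochastic MM.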
Problem

Research questions and friction points this paper is trying to address.

incremental stochastic optimization
Majorization-Minimization
Mixture of Experts
streaming data
latent-variable-free models
Innovation

Methods, ideas, or system contributions that make the work stand out.

incremental stochastic MM
mixture of experts
stochastic approximation
non-EM optimization
streaming data
TrungKhang Tran
School of Computing, National University of Singapore, Singapore
TrungTin Nguyen
Postdoctoral Research Fellow at Queensland University of Technology, Australia
Artificial Intelligence, Statistics, Machine Learning, Mixture Modelling, Clustering Techniques
G. Fort
Laboratoire d’Analyse et d’Architecture des Systèmes, CNRS, Toulouse, France
T. Doan
School of Medicine and Dentistry, Griffith University, Brisbane, Australia
H. Nguyen
Department of Mathematics and Physical Sciences, La Trobe University, Melbourne, Australia; Institute of Mathematics for Industry, Kyushu University, Fukuoka, Japan
Binh T. Nguyen
VinUniversity
statistics, optimal transport
Florence Forbes
Director of Research, INRIA, Grenoble Rhône-Alpes
Statistics, Bayesian image processing, Clustering techniques, Markov random fields, Mixture models
Christopher C. Drovandi
ARC Centre of Excellence for the Mathematical Analysis of Cellular Systems; School of Mathematical Sciences, Queensland University of Technology, Brisbane, Australia