AI Summary
This work addresses the challenge of scaling traditional batch algorithms to high-throughput streaming data and overcomes limitations of existing incremental stochastic methods, such as Expectation-Maximization (EM), that rely on explicit latent-variable modeling and therefore struggle to generalize to complex architectures like softmax-gated Mixture-of-Experts (MoE) models. The authors propose an incremental stochastic Majorization-Minimization (MM) algorithm that relaxes structural assumptions on latent variables, yielding a flexible online optimization framework. By circumventing an explicit E-step and accommodating arbitrary constructible surrogate functions, the method enables the first viable incremental learning scheme for softmax-gated MoE models. Empirical evaluations on both synthetic and real-world datasets, including maize proteomic and ecophysiological data under drought stress, demonstrate consistently superior predictive performance compared to SGD, RMSProp, Adam, and second-order clipped stochastic optimizers.
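To illustrate the general idea (this is not the authors' implementation, and all names and the step-size schedule are illustrative assumptions), the sketch below instantiates incremental stochastic MM on a least-squares objective: each sampled data point contributes a quadratic majorizing surrogate, the surrogate is blended into a running average with a stochastic-approximation weight, and the parameter moves to the minimizer of that averaged surrogate.

```python
import numpy as np

def incremental_stochastic_mm(X, y, n_steps=20000, seed=0):
    """Incremental stochastic MM sketch for least squares (illustrative).

    Each step majorizes the sampled per-sample loss
        f_i(theta) = 0.5 * (x_i @ theta - y_i)**2
    by the quadratic surrogate
        g_i(theta | theta_t) = f_i(theta_t) + grad_i(theta_t) @ (theta - theta_t)
                               + (L / 2) * ||theta - theta_t||**2,
    blends it into a running surrogate with weight gamma_t, and moves to
    the running surrogate's minimizer. (For this particular majorizer the
    update reduces to SGD with step gamma_t / L, which makes it easy to check.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    L = np.max(np.sum(X**2, axis=1))         # global majorization constant
    theta = np.zeros(d)
    a, b = L, L * theta                      # running surrogate (a/2)||th||^2 - b @ th
    for t in range(1, n_steps + 1):
        gamma = t ** -0.6                    # decaying SA weight (assumption)
        i = rng.integers(n)                  # one streaming sample per step
        grad = (X[i] @ theta - y[i]) * X[i]  # gradient of the sampled loss
        # blend the new quadratic surrogate into the running one
        a = (1 - gamma) * a + gamma * L
        b = (1 - gamma) * b + gamma * (L * theta - grad)
        theta = b / a                        # "M-step": minimize the surrogate
    return theta
```

Because the surrogate is built directly from the sampled loss, the scheme never needs an explicit E-step or latent-variable posterior; any constructible majorizer could replace the quadratic one used here.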
Abstract
Processing high-volume, streaming data is increasingly common in modern statistics and machine learning, where batch-mode algorithms are often impractical because they require repeated passes over the full dataset. This has motivated incremental stochastic estimation methods, including the incremental stochastic Expectation-Maximization (EM) algorithm formulated via stochastic approximation. In this work, we revisit and analyze an incremental stochastic variant of the Majorization-Minimization (MM) algorithm, which generalizes incremental stochastic EM as a special case. Our approach relaxes key EM requirements, such as explicit latent-variable representations, enabling broader applicability and greater algorithmic flexibility. We establish theoretical guarantees for the incremental stochastic MM algorithm, proving consistency in the sense that the iterates converge to a stationary point characterized by a vanishing gradient of the objective. We demonstrate these advantages on a softmax-gated mixture of experts (MoE) regression problem, for which no stochastic EM algorithm is available. Empirically, our method consistently outperforms widely used stochastic optimizers, including stochastic gradient descent, root mean square propagation, adaptive moment estimation, and second-order clipped stochastic optimization. These results support the development of new incremental stochastic algorithms, given the central role of softmax-gated MoE architectures in contemporary deep neural networks for heterogeneous data modeling. Beyond synthetic experiments, we also validate practical effectiveness on two real-world datasets, including a bioinformatics study of dent maize genotypes under drought stress that integrates high-dimensional proteomics with ecophysiological traits, where incremental stochastic MM yields stable gains in predictive performance.
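The softmax-gated MoE regression model referenced above can be sketched in its standard generic form (this is not the paper's exact specification; all names are illustrative): gate weights come from a softmax over a linear function of the input, and the prediction mixes the outputs of linear experts. The softmax gate is what blocks a closed-form E-step, motivating the MM surrogate approach.

```python
import numpy as np

def moe_predict(X, W, thetas):
    """Softmax-gated mixture-of-experts mean prediction (generic sketch).

    X      : (n, d) inputs
    W      : (K, d) gating weights; gate_k(x) = softmax(W @ x)_k
    thetas : (K, d) linear expert weights; expert_k(x) = thetas[k] @ x
    Returns the mixture mean sum_k gate_k(x) * expert_k(x) per row.
    """
    logits = X @ W.T                              # (n, K) gating scores
    logits -= logits.max(axis=1, keepdims=True)   # stabilize the softmax
    gates = np.exp(logits)
    gates /= gates.sum(axis=1, keepdims=True)     # each row sums to 1
    experts = X @ thetas.T                        # (n, K) expert outputs
    return np.sum(gates * experts, axis=1)        # (n,) mixture means
```

With `W = 0` the gates are uniform and the prediction is the plain average of the experts, which gives a quick sanity check of the gating arithmetic.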