🤖 AI Summary
Standard backpropagation in Max-Plus neural networks overlooks the subgradient sparsity induced by the max operation, leading to redundant computations. This work proposes a sparse subgradient optimization algorithm tailored to the non-smooth architecture of Max-Plus networks, one that explicitly exploits their inherent sparsity by aligning the updates with the underlying Max-Plus algebraic structure. The method minimizes the worst-case sample loss and achieves significant computational savings while preserving theoretical convergence guarantees. In doing so, it establishes a new paradigm for efficient and scalable training of interpretable neural networks grounded in Max-Plus operations.
📝 Abstract
Deep Neural Networks are powerful tools for solving machine learning problems, but their training often involves dense and costly parameter updates. In this work, we use a novel Max-Plus neural architecture in which classical addition and multiplication are replaced with maximum and summation operations, respectively. This architecture is promising in terms of interpretability, but its training is challenging. A particular feature is that its algebraic structure naturally induces sparsity in the subgradients, since only the neurons that attain the maximum affect the loss. Standard backpropagation, however, fails to exploit this sparsity, leading to unnecessary computations. We focus on minimizing the worst-case sample loss, which transfers this sparsity to the optimization objective, and we propose a sparse subgradient algorithm that explicitly exploits the algebraic sparsity. By tailoring the optimization procedure to the non-smooth nature of Max-Plus models, our method achieves more efficient updates while retaining theoretical guarantees. This highlights a principled path toward bridging algebraic structure and scalable learning.
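As a minimal illustration of the sparsity the abstract describes (not the paper's actual architecture or training code, which is not shown here), consider a hypothetical Max-Plus layer in NumPy. Each output is `max_i(x_i + W[i, j])`, so only the argmax input per output carries a nonzero subgradient, giving exactly one nonzero weight update per output column:

```python
import numpy as np

def maxplus_forward(x, W):
    # Max-Plus layer: out_j = max_i (x_i + W[i, j]).
    # Classical multiplication becomes +, classical addition becomes max.
    return np.max(x[:, None] + W, axis=0)

def maxplus_subgradient(x, W, grad_out):
    # A subgradient of the layer: only the input that attains the max
    # for each output contributes, so grad_W has one nonzero per column.
    scores = x[:, None] + W              # shape (n_in, n_out)
    winners = np.argmax(scores, axis=0)  # winning input index per output
    grad_W = np.zeros_like(W)
    grad_x = np.zeros_like(x)
    for j, i in enumerate(winners):
        grad_W[i, j] = grad_out[j]
        grad_x[i] += grad_out[j]
    return grad_x, grad_W

x = np.array([0.0, 1.0])
W = np.array([[1.0, 0.0],
              [0.0, 2.0]])
out = maxplus_forward(x, W)          # -> [1., 3.]
gx, gW = maxplus_subgradient(x, W, np.ones(2))
```

The same mechanism underlies the worst-case sample loss `max_k L_k`: the max over samples has a subgradient supported on the single worst sample, so each update touches only that sample's winning paths, which is the sparsity a tailored subgradient method can exploit.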