Exploiting Subgradient Sparsity in Max-Plus Neural Networks

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard backpropagation in Max-Plus neural networks overlooks the subgradient sparsity induced by the max operation, leading to redundant computation. This work proposes a sparse subgradient optimization algorithm tailored to the non-smooth architecture of Max-Plus networks, explicitly exploiting the sparsity that the underlying Max-Plus algebraic structure induces. The method minimizes the worst-case sample loss and achieves significant computational savings while preserving theoretical convergence guarantees, offering a principled route to efficient and scalable training of interpretable neural networks grounded in Max-Plus operations.
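To make the sparsity concrete, here is a minimal NumPy sketch of a Max-Plus layer and its subgradient; this is an illustration of the general idea, not the authors' implementation, and all names and values are hypothetical. Because each output is a max over (weight + input) terms, the subgradient with respect to the weights is one-hot per output row: only the winning (argmax) entry is nonzero.

```python
import numpy as np

def maxplus_forward(W, x):
    """Max-plus layer: y_j = max_i (W[j, i] + x[i]).

    Classical sum/product are replaced by max/sum, so each output
    is determined by a single winning input (the argmax).
    """
    S = W + x[None, :]          # (out, in) matrix of W[j, i] + x[i]
    winners = S.argmax(axis=1)  # index of the contributing input per output
    y = S[np.arange(W.shape[0]), winners]
    return y, winners

def maxplus_subgradient(W, x, grad_y):
    """Subgradient of the loss w.r.t. W, given dL/dy.

    Only the (j, argmax_i) entries are nonzero: the subgradient is
    one-hot per output row, which is the sparsity a tailored sparse
    update can exploit instead of touching every weight.
    """
    _, winners = maxplus_forward(W, x)
    gW = np.zeros_like(W)
    gW[np.arange(W.shape[0]), winners] = grad_y
    return gW

# tiny example with hypothetical numbers
W = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, -1.0]])
x = np.array([0.2, 1.0, 0.0])
y, winners = maxplus_forward(W, x)
gW = maxplus_subgradient(W, x, np.ones(2))
```

With these numbers each output row has exactly one nonzero subgradient entry, so a sparse update would modify 2 of the 6 weights.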

📝 Abstract
Deep Neural Networks are powerful tools for solving machine learning problems, but their training often involves dense and costly parameter updates. In this work, we use a novel Max-Plus neural architecture in which classical addition and multiplication are replaced with maximum and summation operations, respectively. This architecture is promising in terms of interpretability, but its training is challenging. A particular feature is that the algebraic structure naturally induces sparsity in the subgradients: only the neurons that attain the maximum affect the loss. Standard backpropagation, however, fails to exploit this sparsity, leading to unnecessary computation. We therefore focus on minimizing the worst-case sample loss, which transfers this sparsity to the optimization objective, and propose a sparse subgradient algorithm that explicitly exploits it. By tailoring the optimization procedure to the non-smooth nature of Max-Plus models, our method achieves more efficient updates while retaining theoretical guarantees, highlighting a principled path toward bridging algebraic structure and scalable learning.
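The worst-case objective min_W max_n L_n(W) is itself a max, so a valid subgradient of the objective is a subgradient of any single worst sample; an update step only needs that one sample's subgradient rather than a dense average over the batch. The sketch below illustrates this generic subgradient step on a toy scalar loss; the function names and the toy loss are hypothetical, not taken from the paper.

```python
import numpy as np

def worst_case_subgradient_step(w, samples, loss_and_subgrad, lr=0.1):
    """One subgradient step on the worst-case loss max_n L_n(w).

    A subgradient of a pointwise max is a subgradient of any maximizer,
    so the update uses only the worst sample's subgradient.
    """
    losses = [loss_and_subgrad(w, s)[0] for s in samples]
    worst = int(np.argmax(losses))           # index of the worst sample
    _, g = loss_and_subgrad(w, samples[worst])
    return w - lr * g, worst

# toy scalar example: L_n(w) = |w - s_n|, with subgradient sign(w - s_n)
def abs_loss(w, s):
    return abs(w - s), np.sign(w - s)

samples = [1.0, -3.0, 2.0]
w, worst = worst_case_subgradient_step(0.0, samples, abs_loss, lr=0.5)
```

Starting from w = 0, the worst sample is s = -3.0 (loss 3), so the step moves w toward it while ignoring the other samples entirely.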
Problem

Research questions and friction points this paper is trying to address.

Max-Plus Neural Networks
subgradient sparsity
backpropagation
non-smooth optimization
sparse updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Max-Plus Neural Networks
subgradient sparsity
sparse optimization
non-smooth optimization
worst-case loss minimization
Ikhlas Enaieh
Image, Data, Signal Department (IDS), Telecom Paris, Institut Polytechnique de Paris, France
Olivier Fercoq
Telecom Paris
Optimisation