Convexity in ReLU Neural Networks: beyond ICNNs?

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
ReLU neural networks lack rigorous convexity guarantees in mathematical imaging tasks such as optimization-based reconstruction and optimal transport, which limits their theoretical reliability and practical applicability. Method: The paper establishes a necessary and sufficient condition for the convexity of ReLU networks of arbitrary depth, showing that every convex function implemented by a single-hidden-layer ReLU network can be rewritten as an input-convex neural network (ICNN) with the same architecture, whereas with more hidden layers ICNNs can no longer express all convex ReLU networks. The characterization is based on products of weights and activations, expressed in the path-lifting framework together with piecewise-affine modeling and convex analysis, and it yields a scalable, exact convexity-verification procedure. Results: The approach enables efficient, exact convexity certification for piecewise-affine ReLU networks with a large number of affine regions, going beyond the architectural constraints of ICNNs and providing both theoretical foundations and computational tools for the trustworthy deployment of learned networks in convex optimization-driven imaging.
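For context, an ICNN enforces convexity in its input by construction: hidden-to-hidden weights are kept elementwise nonnegative, while unconstrained skip connections from the input feed each layer. Below is a minimal NumPy sketch of this construction (not code from the paper; the parameter layout and `np.abs` projection are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def icnn_forward(x, params):
    """Forward pass of a simple input-convex neural network (ICNN).

    Convexity in x holds because each layer applies a nonnegative
    weight matrix (here enforced via np.abs) to the previous convex
    activations, plus an unconstrained affine skip term in x, followed
    by the convex and nondecreasing ReLU.
    """
    z = relu(params["W0"] @ x + params["b0"])
    for Wz, Wx, b in params["hidden"]:
        # np.abs(Wz) keeps the hidden-to-hidden weights nonnegative,
        # which is the key ICNN constraint.
        z = relu(np.abs(Wz) @ z + Wx @ x + b)
    wz, wx, c = params["out"]
    return float(np.abs(wz) @ z + wx @ x + c)
```

The paper's point is that this constraint is sufficient but, beyond one hidden layer, restricts which convex ReLU networks can be represented.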

📝 Abstract
Convex functions and their gradients play a critical role in mathematical imaging, from proximal optimization to Optimal Transport. The success of deep learning has led many to use learning-based methods, where fixed functions or operators are replaced by learned neural networks. Regardless of their empirical superiority, establishing rigorous guarantees for these methods often requires imposing structural constraints on neural architectures, in particular convexity. The most popular way to do so is to use so-called Input Convex Neural Networks (ICNNs). In order to explore the expressivity of ICNNs, we provide necessary and sufficient conditions for a ReLU neural network to be convex. Such characterizations are based on products of weights and activations, and take a simple form for any architecture in the path-lifting framework. As particular applications, we study our characterizations in depth for 1- and 2-hidden-layer neural networks: we show that every convex function implemented by a 1-hidden-layer ReLU network can also be expressed by an ICNN with the same architecture; however, this property no longer holds with more layers. Finally, we provide a numerical procedure that allows an exact check of convexity for ReLU neural networks with a large number of affine regions.
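The exact check described in the abstract works on activation patterns and weight products; as a lightweight, hedged illustration only (not the authors' algorithm), a randomized midpoint test can certify *non*-convexity of an arbitrary feed-forward ReLU network, since any violation of midpoint convexity disproves convexity:

```python
import numpy as np

def relu_net(x, layers):
    """Evaluate a plain feed-forward ReLU network.

    `layers` is a list of (W, b) pairs; ReLU is applied between
    layers but not after the final affine map.
    """
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return float(W @ x + b)

def midpoint_convexity_violated(layers, dim, trials=1000, seed=0, tol=1e-9):
    """Randomized necessary-condition test for convexity.

    Returns True if some pair (x, y) satisfies
    f((x+y)/2) > (f(x)+f(y))/2 + tol, which certifies that f is NOT
    convex. Returning False proves nothing: an exact certificate, as
    in the paper, requires reasoning over all affine regions rather
    than finitely many samples.
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.standard_normal(dim), rng.standard_normal(dim)
        mid = relu_net((x + y) / 2, layers)
        if mid > (relu_net(x, layers) + relu_net(y, layers)) / 2 + tol:
            return True
    return False
```

For example, the 1-D network computing -ReLU(x) is concave, and the test finds a violating pair, while the network computing ReLU(x) passes every sampled midpoint check.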
Problem

Research questions and friction points this paper is trying to address.

ReLU neural networks
convexity verification
optimization algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLU Neural Networks
Convexity Detection
ICNNs Enhancement