🤖 AI Summary
This work addresses the limited interpretability of deep ReLU networks by establishing, for the first time, a rigorous formal correspondence between such networks and Łukasiewicz infinite-valued logic, a many-valued (MV) generalization of Boolean logic. The authors propose a symbolic extraction algorithm that, without modifying the network architecture or requiring retraining, synthesizes an equivalent MV-logic formula directly from a black-box deep ReLU network with real-valued weights. The method integrates MV-logic theory, an analytical characterization of ReLU's piecewise-linear behavior, and multi-valued logic circuit modeling, thereby overcoming key limitations of prior neuro-symbolic approaches, which typically rely on weight discretization or restricted network topologies. Experiments confirm the algorithm's efficacy on networks with genuinely real-valued parameters. The result is the first formal interpretability framework for deep learning grounded in infinite-valued logic, combining theoretical soundness with practical extractability.
📝 Abstract
We propose a new perspective on deep ReLU networks, namely as circuit counterparts of Łukasiewicz infinite-valued logic -- a many-valued (MV) generalization of Boolean logic. An algorithm for extracting formulae in MV logic from deep ReLU networks is presented. As the algorithm applies to networks with general, in particular also real-valued, weights, it can be used to extract logical formulae from deep ReLU networks trained on data.
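To make the claimed correspondence concrete, the following minimal sketch shows how the standard Łukasiewicz connectives on [0, 1] (strong conjunction max(0, x + y − 1), strong disjunction min(1, x + y), and negation 1 − x) can each be realized by a tiny ReLU circuit. The connective definitions are the standard ones from Łukasiewicz logic; the function names and the numerical check are illustrative choices, not the paper's extraction algorithm.

```python
import random

# Standard Lukasiewicz connectives on truth values in [0, 1]:
#   strong conjunction:  max(0, x + y - 1)
#   strong disjunction:  min(1, x + y)
#   negation:            1 - x
def luk_and(x, y): return max(0.0, x + y - 1.0)
def luk_or(x, y):  return min(1.0, x + y)
def luk_not(x):    return 1.0 - x

def relu(z): return max(0.0, z)

# The same connectives as one-ReLU "circuits" (for x, y in [0, 1]):
#   max(0, x + y - 1) = relu(x + y - 1)
#   min(1, x + y)     = 1 - relu(1 - x - y)
#   1 - x is affine, so it needs no ReLU at all.
def relu_and(x, y): return relu(x + y - 1.0)
def relu_or(x, y):  return 1.0 - relu(1.0 - x - y)

# Numerical spot check that the logic and circuit views agree
for _ in range(1000):
    x, y = random.random(), random.random()
    assert abs(luk_and(x, y) - relu_and(x, y)) < 1e-12
    assert abs(luk_or(x, y) - relu_or(x, y)) < 1e-12
```

The direction illustrated here (formula to ReLU circuit) is the easy one; the paper's contribution is the converse, extracting an MV-logic formula from an arbitrary trained ReLU network.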