🤖 AI Summary
To address the challenge of balancing formal verifiability and predictive performance in neural networks, this paper proposes Logic-Gate Neural Networks (LGNs), which replace conventional multiplicative units with Boolean logic gates, yielding sparse, netlist-style, inherently interpretable architectures. We introduce the first SAT-based encoding scheme for LGNs tailored to global robustness and fairness verification, enabling efficient symbolic reasoning. Theoretical analysis and empirical evaluation across five benchmark datasets, including a newly constructed five-class task, demonstrate that LGNs achieve a significantly improved trade-off between verifiability and accuracy: formal verification throughput improves by one to two orders of magnitude over state-of-the-art DNNs, while prediction accuracy remains competitive. Our core contributions are: (1) a verifiability-prioritized architectural design; (2) the first dedicated SAT encoding framework for LGNs; and (3) a unified verification paradigm jointly supporting robustness and fairness guarantees.
📝 Abstract
Learning-based systems are increasingly deployed across various domains, yet the complexity of traditional neural networks poses significant challenges for formal verification. Unlike conventional neural networks, learned Logic Gate Networks (LGNs) replace multiplications with Boolean logic gates, yielding a sparse, netlist-like architecture that is inherently more amenable to symbolic verification while still delivering promising performance. In this paper, we introduce a SAT encoding for verifying global robustness and fairness in LGNs. We evaluate our method on five benchmark datasets, including a newly constructed five-class variant, and find that LGNs are verification-friendly while maintaining strong predictive performance.
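To give a concrete sense of why netlist-style networks lend themselves to SAT reasoning, here is a minimal sketch, not the paper's actual encoding, of how individual Boolean gates can be translated into CNF clauses via the standard Tseitin transformation; the function names and DIMACS-style literal convention are illustrative assumptions.

```python
from itertools import product

def tseitin_and(a, b, out):
    """CNF clauses asserting out <-> (a AND b).
    Literals are nonzero ints; a negative literal denotes negation
    (DIMACS convention). These are the standard Tseitin clauses."""
    return [[-a, -b, out], [a, -out], [b, -out]]

def tseitin_xor(a, b, out):
    """CNF clauses asserting out <-> (a XOR b)."""
    return [[-a, -b, -out], [a, b, -out], [a, -b, out], [-a, b, out]]

def satisfies(clauses, assignment):
    """Check a CNF formula under a total assignment {var: bool}."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Brute-force sanity check: the clauses admit exactly the assignments
# where the output variable matches the gate's truth table.
ok = True
for va, vb in product([False, True], repeat=2):
    expected = va and vb
    ok &= satisfies(tseitin_and(1, 2, 3), {1: va, 2: vb, 3: expected})
    ok &= not satisfies(tseitin_and(1, 2, 3), {1: va, 2: vb, 3: not expected})
print(ok)  # True
```

Composing such per-gate clause sets along the wires of a gate network yields one CNF formula for the whole circuit, which an off-the-shelf SAT solver can then query; property-specific constraints (e.g. for robustness or fairness) would be conjoined on top, as the paper's dedicated encoding does.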