🤖 AI Summary
Boolean networks suffer from explosive parameter growth in interconnection layers as input width increases, severely limiting scalability. Method: We propose a parameter-constant, differentiable, and trainable interconnection architecture—the first enabling end-to-end learning for wide-layer Boolean networks. Our approach jointly optimizes structural sparsity and functional equivalence by integrating SAT-solver–based logical equivalence pruning with activation-similarity–driven, data-aware pruning. Contribution/Results: Experiments demonstrate that our method achieves significant model compression—outperforming magnitude-based pruning baselines—while preserving high inference accuracy. It overcomes the fundamental width-scaling bottleneck of conventional Boolean networks, enabling scalable, verifiable, and compact neuro-symbolic models. This work establishes a new paradigm for efficient, formally tractable neural-symbolic integration.
📝 Abstract
Learned Differentiable Boolean Logic Networks (DBNs) already deliver efficient inference on resource-constrained hardware. We extend them with a trainable, differentiable interconnect whose parameter count remains constant as input width grows, allowing DBNs to scale to far wider layers than earlier learnable-interconnect designs while preserving their advantageous accuracy. To further reduce model size, we propose two complementary pruning stages: an SAT-based logic equivalence pass that removes redundant gates without affecting performance, and a similarity-based, data-driven pass that outperforms a magnitude-style greedy baseline and offers a superior compression-accuracy trade-off.