Learning with Boolean threshold functions

📅 2026-02-19
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Training neural networks on Boolean data poses a challenging discrete, non-convex optimization problem for which conventional gradient-based methods are often ineffective. This work proposes a discrete training framework built on Boolean threshold functions, recasting training as the satisfaction of two complementary constraint sets: one enforcing local functional consistency at each node, the other enforcing global structural consistency across the network. The resulting formulation is solved efficiently with the reflect-reflect-relax (RRR) projection algorithm. When the lower bound on each node's margin is sufficiently large, the method learns sparse, interpretable logic-gate networks with weights constrained to ±1. Across diverse tasks, including multiplier-circuit discovery, binary autoencoding, logic-network inference, and cellular-automaton learning, the framework achieves exact solutions or strong generalization, significantly outperforming standard gradient-based approaches.
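
For intuition, each node in such a network computes a Boolean threshold function: the sign of a weighted sum of ±1 inputs, subject to a lower bound on the margin. The sketch below is a minimal illustration of this primitive under that reading; the symbol names (`w`, `x`, `delta`) and the majority-gate example are ours, not taken from the paper.

```python
import numpy as np

def btf(w, x, delta=1.0):
    # Boolean threshold function: output sign(w . x).
    # The paper's constraint additionally requires the margin
    # |w . x| to meet a lower bound (called delta here).
    s = float(np.dot(w, x))
    if abs(s) < delta:
        raise ValueError("margin constraint violated")
    return 1 if s > 0 else -1

# Example: a 3-input majority gate realized with +/-1 weights.
w = np.array([1, 1, 1])
x = np.array([1, -1, 1])
print(btf(w, x))  # -> 1 (weighted sum = 1, meets margin delta = 1)
```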

📝 Abstract
We develop a method for training neural networks on Boolean data in which the values at all nodes are strictly $\pm 1$, and the resulting models are typically equivalent to networks whose nonzero weights are also $\pm 1$. The method replaces loss minimization with a nonconvex constraint formulation. Each node implements a Boolean threshold function (BTF), and training is expressed through a divide-and-concur decomposition into two complementary constraints: one enforces local BTF consistency between inputs, weights, and output; the other imposes architectural concurrence, equating neuron outputs with downstream inputs and enforcing weight equality across training-data instantiations of the network. The reflect-reflect-relax (RRR) projection algorithm is used to reconcile these constraints. Each BTF constraint includes a lower bound on the margin. When this bound is sufficiently large, the learned representations are provably sparse and equivalent to networks composed of simple logical gates with $\pm 1$ weights. Across a range of tasks -- including multiplier-circuit discovery, binary autoencoding, logic-network inference, and cellular automata learning -- the method achieves exact solutions or strong generalization in regimes where standard gradient-based methods struggle. These results demonstrate that projection-based constraint satisfaction provides a viable and conceptually distinct foundation for learning in discrete neural systems, with implications for interpretability and efficient inference.
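
The RRR iteration named in the abstract is a general-purpose scheme for finding a point in the intersection of two constraint sets A and B via their projections, using the standard update x ← x + β(P_B(2P_A(x) − x) − P_A(x)). A minimal sketch follows; the toy constraint sets (a line and the unit circle, the latter nonconvex) are stand-ins of our own choosing, not the paper's BTF and concurrence constraints.

```python
import numpy as np

def rrr(x, proj_a, proj_b, beta=1.0, iters=1000, tol=1e-9):
    # Reflect-reflect-relax iteration:
    #   x <- x + beta * (P_B(2*P_A(x) - x) - P_A(x))
    # beta = 1 recovers the Douglas-Rachford update; at a fixed
    # point, P_A(x) lies in both constraint sets A and B.
    # In the paper's divide-and-concur setting, A would be the set of
    # locally consistent BTFs and B the concur (structural) constraint.
    for _ in range(iters):
        pa = proj_a(x)
        pb = proj_b(2.0 * pa - x)   # project the reflection of x through P_A(x)
        step = beta * (pb - pa)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return proj_a(x)

# Toy stand-in constraints: the horizontal axis (A, convex) and the
# unit circle (B, nonconvex); their intersection is (+/-1, 0).
proj_line = lambda p: np.array([p[0], 0.0])
proj_circle = lambda p: p / np.linalg.norm(p)

print(rrr(np.array([0.3, 2.0]), proj_line, proj_circle))  # ~ [1. 0.]
```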
Problem

Research questions and friction points this paper is trying to address.

Boolean threshold functions
discrete neural networks
±1 weights
constraint satisfaction
projection-based learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Boolean threshold functions
constraint satisfaction
projection algorithms
discrete neural networks
interpretability
🔎 Similar Papers
No similar papers found.
Veit Elser
Professor of Physics, Cornell University
condensed matter physics
Manish Krishan Lal
Department of Mathematics, Technische Universität München