Gate-level boolean evolutionary geometric attention neural networks

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of simultaneously achieving model interpretability and hardware efficiency in image processing. We propose a fully Boolean-domain neural network: images are modeled as Boolean fields on a 2D geometric manifold, with pixels treated as Boolean variables; XNOR-based Boolean self-attention and Boolean rotary position encoding (RoPE) are introduced and integrated with a Boolean reaction-diffusion mechanism and trainable gate-level logic kernels for information propagation and update; continuous relaxation enables end-to-end differentiable training. The architecture retains the expressive power of convolutional and attention-based models while operating entirely in the Boolean domain, significantly improving digital-circuit compatibility and inference speed. Experiments demonstrate high interpretability, ultra-low power consumption, and strong hardware-acceleration potential, achieving a balance between theoretical rigor and practical deployment efficiency.
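The summary's XNOR-based Boolean query-key matching can be sketched as a bitwise agreement count; higher agreement would then gate (enable) a diffusion pathway. Function names, shapes, and the use of a popcount-style score are our assumptions, not details from the paper:

```python
import numpy as np

def xnor_attention_scores(q, k):
    """Boolean Q-K matching: XNOR agreement count per query/key pair.

    q: (n, d) Boolean query patterns; k: (m, d) Boolean key patterns.
    Returns an (n, m) integer matrix of bitwise agreements, which could
    modulate neighborhood diffusion (higher agreement -> pathway open).
    """
    q = q.astype(bool)
    k = k.astype(bool)
    # XNOR(a, b) == NOT XOR(a, b); sum agreements over the bit dimension.
    return (~(q[:, None, :] ^ k[None, :, :])).sum(axis=-1)

q = np.array([[1, 0, 1, 1]])
k = np.array([[1, 0, 1, 1], [0, 1, 0, 0]])
print(xnor_attention_scores(q, k))  # [[4 0]]
```

The first key matches the query on all four bits, the second on none, so a threshold on this score yields a purely Boolean attention mask.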

📝 Abstract
This paper presents a gate-level Boolean evolutionary geometric attention neural network that models images as Boolean fields governed by logic gates. Each pixel is a Boolean variable (0 or 1) embedded on a two-dimensional geometric manifold (for example, a discrete toroidal lattice), which defines adjacency and information propagation among pixels. The network updates image states through a Boolean reaction-diffusion mechanism: pixels receive Boolean diffusion from neighboring pixels (diffusion process) and perform local logic updates via trainable gate-level logic kernels (reaction process), forming a reaction-diffusion logic network. A Boolean self-attention mechanism is introduced, using XNOR-based Boolean Query-Key (Q-K) attention to modulate neighborhood diffusion pathways and realize logic attention. We also propose Boolean Rotary Position Embedding (RoPE), which encodes relative distances by parity-bit flipping to simulate Boolean "phase" offsets. The overall structure resembles a Transformer but operates entirely in the Boolean domain. Trainable parameters include Q-K pattern bits and gate-level kernel configurations. Because outputs are discrete, continuous relaxation methods (such as sigmoid approximation or soft-logic operators) ensure differentiable training. Theoretical analysis shows that the network achieves universal expressivity, interpretability, and hardware efficiency, capable of reproducing convolutional and attention mechanisms. Applications include high-speed image processing, interpretable artificial intelligence, and digital hardware acceleration, offering promising future research directions.
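The abstract mentions soft-logic operators as one continuous-relaxation option for differentiable training. A minimal probabilistic soft-logic sketch looks like the following; these particular operator definitions are a standard relaxation and are our assumption, not necessarily the paper's exact choice:

```python
# Soft-logic operators: probabilistic relaxations of Boolean gates.
# Inputs lie in [0, 1]; at the corners {0, 1} they reproduce exact
# logic, and they are differentiable everywhere, enabling gradients
# to flow through gate-level kernels during training.
def soft_and(a, b):  return a * b
def soft_or(a, b):   return a + b - a * b
def soft_not(a):     return 1.0 - a
def soft_xnor(a, b): return a * b + (1.0 - a) * (1.0 - b)

# At the Boolean corners the relaxation is exact:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert soft_xnor(a, b) == float(a == b)

# Between corners the output interpolates smoothly:
print(soft_xnor(0.5, 0.9))  # 0.5
```

After training, the relaxed values can be hard-thresholded back to {0, 1} so that inference runs as pure gate logic.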
Problem

Research questions and friction points this paper is trying to address.

Modeling images as Boolean fields using logic gates
Implementing Boolean self-attention via XNOR-based diffusion modulation
Achieving hardware efficiency through discrete Boolean domain operations
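On the Boolean RoPE idea, the abstract says relative distances are encoded "by parity-bit flipping to simulate Boolean phase offsets." One way to read that is a position-dependent bit flip applied before XNOR matching, so the match depends only on the relative offset's parity. The scheme below (flipping odd-indexed bits at odd positions) is purely our illustrative reading:

```python
import numpy as np

def boolean_rope(pattern, position):
    """Parity-bit flipping as a stand-in for rotary phase offsets.

    Flips the odd-indexed bits when `position` is odd, so the XNOR
    match between two encoded patterns depends only on the parity of
    their relative offset. Hypothetical sketch, not the paper's scheme.
    """
    p = pattern.astype(bool).copy()
    if position % 2 == 1:
        p[1::2] = ~p[1::2]  # flip odd-indexed bits at odd positions
    return p

a = np.array([1, 0, 1, 1])
same = boolean_rope(a, 2) ^ boolean_rope(a, 4)  # even-even offset: no mismatch
diff = boolean_rope(a, 2) ^ boolean_rope(a, 3)  # parity differs: odd bits flip
print(same.astype(int), diff.astype(int))  # [0 0 0 0] [0 1 0 1]
```

Identical patterns at same-parity positions still match exactly, while a parity change shows up as a fixed flip mask, which is the Boolean analogue of a phase offset.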
Innovation

Methods, ideas, or system contributions that make the work stand out.

Boolean evolutionary geometric attention neural networks
Boolean reaction-diffusion mechanism with logic gates
XNOR-based Boolean self-attention with Rotary Position Embedding
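The reaction-diffusion mechanism listed above can be sketched as one update on a toroidal Boolean field: an OR over the four toroidal neighbours plays the diffusion role, and a local logic kernel plays the reaction role. The choice of OR diffusion and a fixed XOR kernel is our illustration; the paper trains the gate configuration instead:

```python
import numpy as np

def boolean_reaction_diffusion_step(field):
    """One illustrative update on a toroidal Boolean field.

    Diffusion: each pixel receives the OR of its 4 toroidal neighbours
    (np.roll wraps around, giving the torus topology).
    Reaction:  a local logic kernel combines the pixel with the diffused
    input; XOR here as a stand-in for a trainable gate-level kernel.
    """
    f = field.astype(bool)
    neighbours = (np.roll(f, 1, axis=0) | np.roll(f, -1, axis=0)
                  | np.roll(f, 1, axis=1) | np.roll(f, -1, axis=1))
    return f ^ neighbours  # reaction: fixed XOR kernel for illustration

field = np.zeros((5, 5), dtype=int)
field[2, 2] = 1  # single seed pixel
print(boolean_reaction_diffusion_step(field).astype(int))
```

Starting from a single seed, one step activates the seed's four neighbours (the seed itself survives, since XOR with a zero diffused input leaves it unchanged), showing how logic-gated diffusion propagates information across the lattice.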
Xianshuai Shi
School of Integrated Circuits, Tsinghua University
Jianfeng Zhu
School of Integrated Circuits, Tsinghua University
Leibo Liu
Prof. of Institute of Microelectronics, Tsinghua University
Reconfigurable Computing · Hardware Security and Cryptographic Processing