TsetlinWiSARD: On-Chip Training of Weightless Neural Networks using Tsetlin Automata on FPGAs

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes TsetlinWiSARD, a novel weightless neural network (WNN) architecture that overcomes the limitations of conventional WNNs—such as overfitting, low accuracy, and inefficiency in on-chip learning—by replacing their one-shot, memory-based training with an iterative, probabilistic binary feedback mechanism powered by Tsetlin automata. For the first time, Tsetlin automata are integrated into WNN training to enable continuous optimization. A dedicated FPGA-based hardware implementation combines the Tsetlin automata with the WiSARD model, leveraging binary representations and lookup-table structures to achieve remarkable efficiency gains: training speed exceeds that of traditional WiSARD by over 1,000×, while reducing resource utilization by 22%, latency by 93.3%, and power consumption by 64.2% compared to existing FPGA accelerators.
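The feedback mechanism described above rests on the classic two-action Tsetlin automaton: a finite-state machine whose state drifts deeper into one action's half of the state space on reward and toward the opposite action on penalty. The sketch below is an illustrative minimal implementation, not the paper's hardware design; the class and method names are my own.

```python
class TsetlinAutomaton:
    """Minimal sketch of a two-action Tsetlin automaton with 2*n states.
    States 1..n select action 0 ("exclude"); states n+1..2n select
    action 1 ("include"). This is an assumed textbook formulation, not
    the exact FPGA logic used in TsetlinWiSARD."""

    def __init__(self, n_states_per_action=100):
        self.n = n_states_per_action
        self.state = self.n  # start at the boundary, on the action-0 side

    def action(self):
        return 1 if self.state > self.n else 0

    def reward(self):
        # Reinforce the current action by moving away from the boundary.
        if self.action() == 1:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action by moving toward (and across) the boundary.
        if self.action() == 1:
            self.state -= 1
        else:
            self.state += 1
```

Because the state is a small saturating counter and the feedback is a single bit, the update maps naturally onto compact FPGA logic, which is what enables the iterative on-chip training the summary describes.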

📝 Abstract
Increasing demands for adaptability, privacy, and security at the edge have persistently pushed the frontiers for a new generation of machine learning (ML) algorithms with on-chip training and inference capabilities. The Weightless Neural Network (WNN) is one such algorithm, built on simple lookup-table-based neuron structures. As a result, it offers architectural benefits, such as low-latency, low-complexity inference, compared to deep neural networks that depend heavily on multiply-accumulate operations. However, traditional WNNs rely on memorization-based one-shot training, which either leads to overfitting and reduced accuracy or requires tedious post-training adjustments, limiting their effectiveness for efficient on-chip training. In this work, we propose TsetlinWiSARD, a training approach for WNNs that leverages Tsetlin Automata (TAs) to enable probabilistic, feedback-driven learning. It overcomes the overfitting of WiSARD's one-shot training with iterative optimization, while maintaining simple, continuous binary feedback for efficient on-chip training. Central to our approach is a field-programmable gate array (FPGA)-based training architecture that delivers state-of-the-art accuracy while significantly improving hardware efficiency. Our approach provides over 1,000× faster training compared with the traditional WiSARD implementation of WNNs. Further, we demonstrate 22% reduced resource usage, 93.3% lower latency, and 64.2% lower power consumption compared to FPGA-based training accelerators implementing other ML algorithms.
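To make the abstract's contrast concrete, the sketch below illustrates the baseline being improved upon: a WiSARD discriminator whose RAM neurons are lookup tables addressed by tuples of input bits, trained by one-shot memorization. This is a hypothetical minimal rendering for illustration (class and method names are assumptions), not the paper's FPGA implementation.

```python
import random

class WiSARDDiscriminator:
    """Sketch of a WiSARD discriminator: each RAM neuron is a lookup
    table addressed by a tuple of input bits. One-shot training simply
    memorizes every observed address pattern, which is fast but prone
    to the overfitting the paper targets."""

    def __init__(self, input_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)  # random input-to-neuron bit mapping, as in WiSARD
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # each RAM stores seen addresses

    def _addresses(self, x):
        for idxs, ram in zip(self.tuples, self.rams):
            yield tuple(x[i] for i in idxs), ram

    def train(self, x):
        # One-shot, memorization-based training: set the addressed cell.
        for addr, ram in self._addresses(x):
            ram.add(addr)

    def score(self, x):
        # Inference: count RAMs whose addressed cell was set during training.
        return sum(addr in ram for addr, ram in self._addresses(x))
```

In TsetlinWiSARD, as the abstract explains, this write-once memorization is replaced with iterative, TA-driven binary feedback on the lookup-table contents, trading a single pass for continuous optimization.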
Problem

Research questions and friction points this paper is trying to address.

Weightless Neural Networks
on-chip training
overfitting
one-shot training
hardware efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tsetlin Automata
Weightless Neural Networks
On-chip Training
FPGA
WiSARD
Shengyu Duan
Newcastle University
Marcos L. L. Sartori
Microsystems Research Group, Newcastle University
Rishad Shafik
Professor of Microelectronic Systems, Newcastle University, UK
Machine Learning Hardware; Energy-Aware Computing; HW/SW Co-design
Alex Yakovlev
Microsystems Research Group, Newcastle University; Literal Labs, Newcastle upon Tyne, UK