Weight Mapping Properties of a Dual Tree Single Clock Adiabatic Capacitive Neuron

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the hardware mapping challenge from artificial neural networks (ANNs) to dual-tree single-clock adiabatic capacitive neuron (DTSC ACN) circuits. We propose a joint optimization framework for weight quantization and capacitive parameter mapping. By analyzing the impact of weight quantization on comparator decision accuracy and integrated circuit (IC) area, we formulate a co-design metric balancing layout efficiency and classification performance. Leveraging TensorFlow/Larq for ANN training, our method achieves functionally equivalent conversion from synaptic weights to physical capacitance values. Experimental evaluation across three representative ANN architectures demonstrates: (1) 100% logical functional equivalence; (2) average chip area reduction of 32.7%; and (3) image classification accuracy improvement of 1.8–2.4 percentage points. These results significantly enhance the energy efficiency and integration density of adiabatic neuromorphic chips.

📝 Abstract
Dual Tree Single Clock (DTSC) Adiabatic Capacitive Neuron (ACN) circuits offer the potential for highly energy-efficient Artificial Neural Network (ANN) computation in full custom analog IC designs. The efficient mapping of abstract Artificial Neuron (AN) weights, extracted from software-trained ANNs, onto physical ACN capacitance values has, however, yet to be fully researched. In this paper, we explore the unexpected hidden complexities, challenges and properties of the mapping, as well as the ramifications for IC designers in terms of accuracy, design and implementation. We propose an optimal AN-to-ACN methodology that promotes smaller chip sizes and improved overall classification accuracy, both necessary for successful practical deployment. Using the TensorFlow and Larq software frameworks, we train three different ANN networks and map their weights into the energy-efficient DTSC ACN capacitance value domain, demonstrating 100% functional equivalency. Finally, we delve into the impact of weight quantization on ACN performance using novel metrics tied to practical IC considerations, such as IC floor space and comparator decision-making efficacy.
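To make the mapping concrete, a minimal sketch of one plausible weight-to-capacitance scheme is shown below. It assumes a differential split: positive weights become capacitors on the "+" tree, negative weights on the "-" tree, and the comparator fires when the "+" tree accumulates more charge. The function names, the unit capacitance `c_unit`, and the split itself are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

def weights_to_capacitances(weights, c_unit=1e-15):
    """Map signed integer weights onto a dual-tree capacitor split.

    Positive weights become capacitors on the '+' tree, negative
    weights on the '-' tree; the magnitude scales the unit
    capacitance c_unit (illustrative scheme, not the paper's).
    """
    w = np.asarray(weights)
    c_pos = np.where(w > 0, w, 0) * c_unit
    c_neg = np.where(w < 0, -w, 0) * c_unit
    return c_pos, c_neg

def acn_decision(x, c_pos, c_neg):
    """Comparator decision via charge sharing: returns 1.0 when the
    '+' tree collects more charge than the '-' tree for binary
    inputs x, else 0.0."""
    x = np.asarray(x)
    return float(np.dot(x, c_pos) > np.dot(x, c_neg))
```

Because the decision depends only on the sign of the charge difference, any uniform scaling of `c_unit` leaves the logical output unchanged, which is what makes a functionally equivalent software-to-hardware conversion possible.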
Problem

Research questions and friction points this paper is trying to address.

Mapping software-trained neural network weights to physical capacitance values
Addressing hidden complexities in weight mapping for energy-efficient analog ICs
Investigating weight quantization impact on chip size and classification accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes an optimal methodology for mapping artificial neuron weights to ACN capacitances
Demonstrates 100% functional equivalency using TensorFlow and Larq
Analyzes weight quantization impact with novel IC metrics
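The quantization analysis can be sketched as follows: trained floating-point weights are rounded to a small set of signed integer levels, and the total unit-capacitor count serves as a rough proxy for IC floor space. The quantizer, the level count, and the area proxy here are simplifying assumptions for illustration; the paper's actual co-design metric is not reproduced.

```python
import numpy as np

def quantize_weights(w_float, levels=7):
    """Uniformly quantize float weights to signed integers in
    [-levels//2, levels//2]. (Illustrative stand-in for the
    paper's quantization step.)"""
    half = levels // 2
    scale = np.max(np.abs(w_float)) / half
    return np.clip(np.round(np.asarray(w_float) / scale),
                   -half, half).astype(int)

def unit_capacitor_count(w_int):
    """Floor-space proxy: assume each unit of |weight| costs one
    unit capacitor in the dual-tree layout."""
    return int(np.sum(np.abs(w_int)))
```

Coarser quantization (fewer levels) shrinks the capacitor count, and hence the layout, at the cost of decision accuracy; sweeping `levels` against classification accuracy is one simple way to explore the trade-off the paper formalizes.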
Mike Smart
Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Scotland, EH9 3FB, UK
Sachin Maheshwari
Research Associate, Centre for Electronics Frontiers, University of Edinburgh
Integrated Circuits Design, Energy Recovery Logic, Neural Network, Crossbar Arrays
Himadri Singh Raghav
Research Fellow, National University of Singapore
Hardware Security, Energy Recovery Logic
Alexander Serb
Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Scotland, EH9 3FB, UK