Hybrid unary-binary design for multiplier-less printed Machine Learning classifiers

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Printed electronics (PE) face significant challenges in implementing complex machine learning (ML) classifiers efficiently because their large feature sizes limit deployment in low-cost, flexible hardware. To address this, we propose a multiplier-free, hybrid unary/binary computing architecture designed specifically for PE that eliminates conventional encoders and integrates architecture-aware training for end-to-end hardware-algorithm co-optimization. This approach drastically reduces circuit complexity and hardware overhead while enabling energy-efficient, low-power implementation of multilayer perceptrons (MLPs). Evaluated on six benchmark datasets, our design achieves, on average, a 46% reduction in area and a 39% reduction in power compared to state-of-the-art PE-based MLP implementations, with negligible accuracy degradation. The proposed architecture establishes a scalable hardware paradigm for edge intelligence enabled by printed electronics.

📝 Abstract
Printed Electronics (PE) provide a flexible, cost-efficient alternative to silicon for implementing machine learning (ML) circuits, but their large feature sizes limit classifier complexity. Leveraging PE's low fabrication and NRE costs, designers can tailor hardware to specific ML models, simplifying circuit design. This work explores alternative arithmetic and proposes a hybrid unary-binary architecture that removes costly encoders and enables efficient, multiplier-less execution of MLP classifiers. We also introduce architecture-aware training to further improve area and power efficiency. Evaluation on six datasets shows average reductions of 46% in area and 39% in power, with minimal accuracy loss, surpassing other state-of-the-art MLP designs.
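As a rough software analogue of the multiplier-less unary arithmetic described above (a minimal sketch: the function names, the thermometer-code length, and the replicate-and-count scheme are illustrative assumptions, not the paper's actual circuit), multiplying a unary-encoded value by a small integer weight reduces to bit replication plus counting, with no multiplier:

```python
def unary_encode(v, n):
    """Thermometer (unary) code: v ones followed by n - v zeros, 0 <= v <= n."""
    return [1] * v + [0] * (n - v)

def unary_times_weight(bits, w):
    """Multiplier-free scaling: replicate each bit w times (hardware fan-out),
    so the product v * w is recovered purely by counting ones."""
    return [b for b in bits for _ in range(w)]

def neuron_accumulate(inputs, weights, n=15):
    """Weighted sum of unary-encoded inputs using only replication and counting.
    Negative weights and the binary half of the hybrid scheme are omitted."""
    acc = 0
    for v, w in zip(inputs, weights):
        acc += sum(unary_times_weight(unary_encode(v, n), w))
    return acc

# 3*2 + 5*4 + 1*7 = 33, computed without a single multiply
print(neuron_accumulate([3, 5, 1], [2, 4, 7]))  # → 33
```

In a printed circuit, the replication step would correspond to wiring fan-out into a counter or adder tree rather than a loop; the sketch only conveys why the datapath needs no multiplier.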
Problem

Research questions and friction points this paper is trying to address.

Large feature sizes in printed electronics limit the complexity of ML classifiers that can be fabricated
Conventional multipliers and encoders impose high circuit cost in printed designs
Area and power budgets constrain printed ML circuits targeting low-cost, flexible hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid unary-binary multiplier-less architecture
Architecture-aware training for efficiency optimization
Printed Electronics MLP classifier implementation
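The abstract mentions architecture-aware training only at a high level. One common way to realize such hardware-algorithm co-optimization (an assumption for illustration, not the paper's stated method) is to project weights onto the small integer range the multiplier-less datapath can realize after each optimizer step:

```python
def quantize_weights(weights, w_max=7):
    """Project real-valued weights onto small signed integers that a
    replicate-and-count unary datapath can realize directly.
    w_max is a hypothetical range bound, not a value from the paper."""
    return [max(-w_max, min(w_max, round(w))) for w in weights]

# Example projection after a gradient step
print(quantize_weights([0.24, -3.8, 9.1]))  # → [0, -4, 7]
```

Applying the projection inside the training loop lets the optimizer compensate for quantization, which is one plausible reason the reported accuracy loss stays small.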
Giorgos Armeniakos
National and Technical University of Athens, Greece
Theodoros Mantzakidis
National and Technical University of Athens, Greece
Dimitrios Soudris
Professor, Electrical & Computer Eng., National Technical Univ. of Athens
Embedded Systems · FPGAs · Accelerators · Edge & Cloud Computing · High Performance Computing