🤖 AI Summary
Large feature sizes in flexible electronics constrain integration density, hindering the deployment of high-accuracy, near-sensor machine learning circuits under stringent area and power constraints. Existing SVM hardware supports only linear or RBF kernels individually, forcing a fundamental trade-off between accuracy and hardware cost. Method: This work proposes the first mixed-kernel, mixed-signal SVM architecture tailored for flexible electronics, jointly optimizing linear/RBF kernel selection and digital/analog computational-domain mapping via co-designed training and classifier-to-kernel allocation. Contribution/Results: The architecture breaks the single-kernel accuracy-efficiency trade-off while maintaining low power and small area. Experiments show a 7.7% accuracy improvement over state-of-the-art linear SVMs; compared to fully digital RBF implementations, it reduces area by 108x and power by 17x, significantly enhancing energy efficiency and integration scalability for flexible intelligent sensing systems.
📝 Abstract
Flexible Electronics (FE) have emerged as a promising alternative to silicon-based technologies, offering on-demand low-cost fabrication, conformality, and sustainability. However, their large feature sizes severely limit integration density, imposing strict area and power constraints and thus prohibiting the realization of Machine Learning (ML) circuits, which could significantly enhance the capabilities of relevant near-sensor applications. Support Vector Machines (SVMs) offer high accuracy in such applications at relatively low computational complexity, satisfying FE technologies' constraints. Existing SVM designs rely solely on linear or Radial Basis Function (RBF) kernels, forcing a trade-off between hardware cost and accuracy. Linear kernels, implemented digitally, minimize overhead but sacrifice accuracy, while the more accurate RBF kernels are prohibitively large in digital, and their analog realization introduces inherent functional approximation error. In this work, we propose the first mixed-kernel and mixed-signal SVM design in FE, which unifies the advantages of both implementations and balances the cost/accuracy trade-off. To that end, we introduce a co-optimization approach that trains our mixed-kernel SVMs and maps each binary SVM classifier to the appropriate kernel (linear/RBF) and domain (digital/analog), aiming to maximize accuracy whilst minimizing the number of costly RBF classifiers. Our designs deliver 7.7% higher accuracy than state-of-the-art single-kernel linear SVMs, and reduce area and power by 108x and 17x on average compared to digital RBF implementations.
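The core idea, assigning each binary classifier the cheapest kernel that still meets accuracy needs, can be illustrated with a toy sketch. This is not the paper's training procedure: it uses kernel ridge regression as a stand-in for SVM training, a hypothetical `rbf_penalty` margin to model the hardware cost of RBF classifiers, and synthetic data (one linearly separable pair, one XOR-like pair). It shows only the selection logic: prefer linear, and pay for RBF only when it clearly wins.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=1.0):
    # squared Euclidean distances between all row pairs
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def fit(K, y, lam=1e-3):
    # kernel ridge regression as a simplified stand-in for SVM training
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(alpha, K):
    return np.sign(K @ alpha)

def choose_kernel(Xtr, ytr, Xte, yte, rbf_penalty=0.02):
    """Pick 'rbf' only if its accuracy gain over 'linear' exceeds a
    penalty that crudely models the extra hardware cost of RBF."""
    acc = {}
    for name, k in [("linear", linear_kernel), ("rbf", rbf_kernel)]:
        alpha = fit(k(Xtr, Xtr), ytr)
        acc[name] = float((predict(alpha, k(Xte, Xtr)) == yte).mean())
    return ("rbf" if acc["rbf"] - acc["linear"] > rbf_penalty else "linear"), acc

# Pair 1: linearly separable blobs -> linear kernel should suffice
Xa = np.vstack([rng.normal([2, 0], 0.3, (80, 2)),
                rng.normal([-2, 0], 0.3, (80, 2))])
ya = np.array([1] * 80 + [-1] * 80)
kernel_sep, _ = choose_kernel(Xa[::2], ya[::2], Xa[1::2], ya[1::2])

# Pair 2: XOR-like labels -> only the RBF kernel separates them
Xb = rng.uniform(-1, 1, (160, 2))
yb = np.where(Xb[:, 0] * Xb[:, 1] > 0, 1, -1)
kernel_xor, _ = choose_kernel(Xb[::2], yb[::2], Xb[1::2], yb[1::2])

print(kernel_sep, kernel_xor)
```

In a one-vs-one multiclass SVM this choice would be made per class pair, so the few pairs that truly need a nonlinear boundary get the (costlier, analog) RBF kernel while the rest stay on cheap digital linear classifiers, which is the intuition behind the accuracy/cost balance described above.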