🤖 AI Summary
To address high latency, excessive power consumption, and privacy leakage in edge-based real-time signal processing for wearable ultrasound devices, this work proposes the first RISC-V vector-tensor unit (VTU) tightly integrated with a memory-coupled multi-precision FFT accelerator. Implemented in 65 nm CMOS, the SoC integrates a custom VTU and a 16/32-bit floating-point FFT accelerator, enabling full-stack on-chip execution, from ultrasound preprocessing to machine learning-based postprocessing. Evaluated on gesture recognition, the design achieves 298.03 GFLOPS/W CNN energy efficiency and a 5x speedup over a state-of-the-art SoC with a similar mission profile, while consuming only 2.5 mJ per inference at 12 mW peak power. Key innovations include: (i) a hardware-software co-optimized VTU-FFT coupling architecture, and (ii) ultrasound-specific optimizations across the signal chain, including data layout, precision scaling, and memory access patterns. This holistic approach enables ultra-low-power, private, and real-time ultrasound analytics at the wearable edge.
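As a back-of-envelope sanity check on the reported figures (a sketch, not a measured number: it assumes the entire 2.5 mJ inference runs at the 12 mW peak, whereas the average power of the real chip may be lower):

```python
# Reported figures from the summary above.
energy_per_inference_j = 2.5e-3   # 2.5 mJ per inference
peak_power_w = 12e-3              # 12 mW peak power

# If the whole inference ran at peak power, energy / power gives the latency.
# Since average power <= peak power, this is a lower bound on inference time.
implied_latency_s = energy_per_inference_j / peak_power_w
print(f"implied inference time at sustained peak power: {implied_latency_s * 1e3:.0f} ms")
# -> about 208 ms under the sustained-peak-power assumption
```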
📝 Abstract
Most Wearable Ultrasound (WUS) devices lack the computational power to process signals at the edge, instead relying on remote offload, which introduces latency, high power consumption, and privacy concerns. We present Maestro, a RISC-V SoC with unified Vector-Tensor Unit (VTU) and memory-coupled Fast Fourier Transform (FFT) accelerators targeting edge processing for wearable ultrasound devices, fabricated using low-cost TSMC 65nm CMOS technology. The VTU achieves peak 302GFLOPS/W and 19.8GFLOPS at FP16, while the multi-precision 16/32-bit floating-point FFT accelerator delivers peak 60.6GFLOPS/W and 3.6GFLOPS at FP16. We evaluate Maestro on a US-based gesture recognition task, achieving 1.62GFLOPS in signal processing at 26.68GFLOPS/W, and 19.52GFLOPS in Convolutional Neural Network (CNN) workloads at 298.03GFLOPS/W. Compared to a state-of-the-art SoC with a similar mission profile, Maestro achieves a 5x speedup while consuming only 12mW, with an energy consumption of 2.5mJ in a wearable US channel-preprocessing and ML-based postprocessing pipeline.
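The paper does not describe the accelerator's datapath here, but the accuracy side of a 16/32-bit multi-precision FFT can be illustrated with a pure-Python sketch: a radix-2 Cooley-Tukey FFT whose intermediates are optionally rounded to IEEE-754 half precision (via the `struct` `'e'` format), compared against a full-precision run. The signal and the `quantize` hook are illustrative choices, not the chip's actual pipeline.

```python
import cmath
import math
import struct

def to_fp16(x: float) -> float:
    # Round a Python float to IEEE-754 half precision and back,
    # emulating FP16 storage of intermediate values.
    return struct.unpack('e', struct.pack('e', x))[0]

def fft(x, quantize=lambda v: v):
    # Radix-2 Cooley-Tukey FFT; `quantize` models reduced-precision
    # storage of intermediates (identity function = full precision).
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], quantize)
    odd = fft(x[1::2], quantize)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        t = complex(quantize(t.real), quantize(t.imag))
        a, b = even[k] + t, even[k] - t
        out[k] = complex(quantize(a.real), quantize(a.imag))
        out[k + n // 2] = complex(quantize(b.real), quantize(b.imag))
    return out

# Example: 64-point FFT of a pure tone at bin 5, full precision vs. FP16.
n = 64
sig = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
ref = fft(sig)                                       # "FP32/FP64" reference
half = fft([to_fp16(s) for s in sig], quantize=to_fp16)  # FP16 intermediates
err = max(abs(a - b) for a, b in zip(ref, half))
print(f"max abs error with FP16 intermediates: {err:.4f}")
```

The run with FP16 intermediates still resolves the tone at bin 5 but with a small accumulated rounding error, which is the trade-off a multi-precision design exposes: FP16 for throughput and energy where the error is tolerable, FP32 where it is not.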