🤖 AI Summary
This work addresses the excessive off-chip configuration overhead that fine-grained control imposes on modern reconfigurable AI accelerators. To mitigate this, the authors propose MINISA, a minimal instruction set architecture that operates at virtual-neuron (VN) granularity and abstracts the control logic of the FEATHER+ hardware. MINISA achieves flexible yet low-overhead configuration using only three layout instructions (for inputs, weights, and outputs) and a single dataflow-mapping instruction. By raising the control granularity to the VN level for the first time, it drastically reduces instruction count while preserving expressiveness. Evaluated across 50 GEMM workloads, MINISA reduces off-chip instruction traffic by a geometric mean of 35× to 4×10⁵×, eliminates the instruction-fetch stalls that consume 96.9% of micro-instruction cycles, and delivers up to 31.6× end-to-end speedup.
📝 Abstract
Modern reconfigurable AI accelerators rely on rich mapping and data-layout flexibility to sustain high utilization across matrix multiplication, convolution, and emerging applications beyond AI. However, exposing this flexibility through fine-grained micro-control incurs the prohibitive overhead of fetching configuration bits from off-chip memory. This paper presents MINISA, a minimal instruction set that programs a reconfigurable accelerator at the granularity of Virtual Neurons (VNs): the coarsest control granularity that retains the hardware's flexibility and the finest that avoids unnecessary control costs. First, we introduce FEATHER+, a modest refinement of FEATHER that eliminates the redundant on-chip replication needed for runtime dataflow/layout co-switching and supports dynamic cases where input and weight data are unavailable before execution, precluding offline layout manipulation. MINISA then abstracts the control of FEATHER+ into three layout-setting instructions, for input, weight, and output VNs, and a single mapping instruction that sets the dataflow. This reduces the control and instruction footprint while preserving the full legal mapping and layout space supported by FEATHER+. Our results show that MINISA reduces geometric-mean off-chip instruction traffic by factors ranging from 35× to 4×10⁵× across various array sizes on 50 GEMM workloads spanning AI (GPT-oss), FHE, and ZKP. This eliminates the instruction-fetch stalls that consume 96.9% of micro-instruction cycles, yielding up to 31.6× end-to-end speedup for a 16×256 FEATHER+. Our code: https://github.com/maeri-project/FEATHER/tree/main/minisa.
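The four-instruction interface described above can be sketched in a few lines. This is a purely illustrative model under stated assumptions: the abstract only specifies that MINISA has three layout-setting instructions (input, weight, output VNs) and one dataflow-mapping instruction, so the instruction names, operand fields, and the weight-stationary example below are hypothetical, not the paper's actual encoding.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of MINISA's four instructions. Only the *count and
# roles* of the instructions come from the paper; all field names are
# illustrative assumptions.

@dataclass
class SetLayout:
    operand: str               # "input" | "weight" | "output"
    vn_shape: Tuple[int, int]  # assumed virtual-neuron tile shape (rows, cols)
    order: str                 # assumed layout descriptor, e.g. "row_major"

@dataclass
class SetMapping:
    dataflow: str              # assumed dataflow name, e.g. "weight_stationary"
    vns_per_column: int        # assumed knob: VNs mapped per PE column

def program_gemm(M: int, K: int, N: int, vns_per_column: int = 8) -> List:
    """Emit one hypothetical MINISA program for an M x K x N GEMM.
    The program is always three layout instructions plus one mapping
    instruction, independent of problem size -- this size-independence
    is what cuts off-chip instruction traffic versus per-PE micro-control."""
    return [
        SetLayout("input",  (M, K), "row_major"),
        SetLayout("weight", (K, N), "col_major"),
        SetLayout("output", (M, N), "row_major"),
        SetMapping("weight_stationary", vns_per_column),
    ]

prog = program_gemm(64, 256, 64)
assert len(prog) == 4  # instruction count does not grow with GEMM size
```

The point of the sketch is the contrast in scaling: fine-grained micro-control emits configuration bits proportional to the array and workload size, whereas a VN-granularity program stays at four instructions per layer.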