🤖 AI Summary
Existing digital compute-in-memory (CIM) accelerators suffer from inflexible microarchitectures, dataflows, and pipelining, hindering efficient execution of multimodal Transformer models. To address this, we propose StreamDCIM, a tile-based streaming digital CIM accelerator. Our key contributions are: (1) a tile-based reconfigurable CIM macro microarchitecture with normal and hybrid reconfigurable modes that improves intra-macro CIM utilization; (2) a mixed-stationary cross-forwarding dataflow with tile-based execution decoupling that exploits tile-level computation parallelism; and (3) a ping-pong-like fine-grained compute-rewriting pipeline that overlaps high-latency on-chip CIM rewriting with computation. Evaluated on typical multimodal Transformer models, StreamDCIM achieves geometric-mean speedups of 2.63× and 1.28× over non-streaming and layer-based streaming CIM baselines, respectively, while also improving energy efficiency and throughput.
📝 Abstract
Multimodal Transformers are emerging artificial intelligence (AI) models designed to process a mixture of signals from diverse modalities. Digital computing-in-memory (CIM) architectures are considered promising for achieving high efficiency while maintaining high accuracy. However, current digital CIM-based accelerators exhibit inflexibility in microarchitecture, dataflow, and pipelining, which prevents them from effectively accelerating multimodal Transformers. In this paper, we propose StreamDCIM, a tile-based streaming digital CIM accelerator for multimodal Transformers. It overcomes the above challenges with three features: First, we present a tile-based reconfigurable CIM macro microarchitecture with normal and hybrid reconfigurable modes to improve intra-macro CIM utilization. Second, we implement a mixed-stationary cross-forwarding dataflow with tile-based execution decoupling to exploit tile-level computation parallelism. Third, we introduce a ping-pong-like fine-grained compute-rewriting pipeline to overlap high-latency on-chip CIM rewriting. Experimental results show that StreamDCIM outperforms non-streaming and layer-based streaming CIM-based solutions by a geometric mean of 2.63$\times$ and 1.28$\times$ on typical multimodal Transformer models.
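To build intuition for why the ping-pong-like pipeline helps, the sketch below models the latency of hiding CIM weight rewriting behind computation with two macro banks, in the spirit of double buffering. This is a generic timing illustration, not the paper's implementation; the function names, cycle counts, and two-bank assumption are all hypothetical.

```python
# Hypothetical timing model of a ping-pong compute-rewriting pipeline:
# while one CIM macro bank computes on its resident weights, the other
# bank's weights are rewritten, hiding rewrite latency. Cycle counts
# below are illustrative assumptions, not measurements from the paper.

COMPUTE_CYCLES = 100   # assumed cycles to compute one tile
REWRITE_CYCLES = 80    # assumed cycles to rewrite one macro's weights

def serial_latency(n_tiles: int) -> int:
    """No overlap: every tile pays full rewrite plus compute latency."""
    return n_tiles * (REWRITE_CYCLES + COMPUTE_CYCLES)

def pingpong_latency(n_tiles: int) -> int:
    """Two banks: tile i+1's rewrite overlaps tile i's compute."""
    if n_tiles == 0:
        return 0
    # Only the first rewrite is exposed; each subsequent step costs
    # max(compute, rewrite) because the two proceed in parallel, and
    # the final tile's compute drains the pipeline.
    steady = (n_tiles - 1) * max(COMPUTE_CYCLES, REWRITE_CYCLES)
    return REWRITE_CYCLES + steady + COMPUTE_CYCLES

if __name__ == "__main__":
    n = 16
    print(serial_latency(n))    # 16 * 180 = 2880 cycles
    print(pingpong_latency(n))  # 80 + 15 * 100 + 100 = 1680 cycles
```

Under these assumed cycle counts, overlapping shrinks per-tile cost from rewrite + compute to max(rewrite, compute) in steady state, which is the basic payoff of any fine-grained compute-rewriting overlap.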