🤖 AI Summary
The lack of an open, agile coarse-grained reconfigurable architecture (CGRA) ecosystem hinders hardware–software co-design in AI and edge computing. Method: This paper presents a fully open-source, full-stack CGRA ecosystem comprising (i) HyCUBE, a CGRA with a reconfigurable single-cycle multi-hop interconnect for efficient data movement; (ii) PACE, a low-power system-on-chip that embeds HyCUBE alongside a RISC-V processor for edge computing; and (iii) Morpher, an open, architecture-adaptive framework supporting design space exploration, compilation, simulation, and validation. Contribution/Results: The ecosystem enables cross-layer co-optimization and reproducible research, substantially lowering the barrier to architectural innovation, and demonstrates how CGRAs can anchor agile hardware development. The paper further calls for a unified abstraction layer for CGRAs and spatial accelerators that decouples hardware specialization from software development, toward an open, modular, community-driven spatial computing ecosystem.
📝 Abstract
Modern computing workloads, particularly in AI and edge applications, demand hardware–software co-design to meet aggressive performance and energy targets. Such co-design benefits from open and agile platforms that replace closed, vertically integrated development with modular, community-driven ecosystems. Coarse-Grained Reconfigurable Architectures (CGRAs), with their unique balance of flexibility and efficiency, are particularly well-suited for this paradigm. When built on open-source hardware generators and software toolchains, CGRAs provide a compelling foundation for architectural exploration, cross-layer optimization, and real-world deployment. In this paper, we present an open CGRA ecosystem that we have developed to support agile innovation across the stack. Our contributions include HyCUBE, a CGRA with a reconfigurable single-cycle multi-hop interconnect for efficient data movement; PACE, which embeds a power-efficient HyCUBE within a RISC-V SoC targeting edge computing; and Morpher, a fully open-source, architecture-adaptive CGRA design framework that supports design space exploration, compilation, simulation, and validation. By embracing openness at every layer, we aim to lower barriers to innovation, enable reproducible research, and demonstrate how CGRAs can anchor the next wave of agile hardware development. We conclude with a call for a unified abstraction layer for CGRAs and spatial accelerators, one that decouples hardware specialization from software development. Such a representation would unlock architectural portability, compiler innovation, and a scalable, open foundation for spatial computing.