🤖 AI Summary
This work addresses the complexity and performance instability that arise when integrating diverse accelerator APIs (such as CUDA, SYCL, and Triton) and vendor-specific libraries in heterogeneous systems, where differing abstractions and synchronization mechanisms hinder efficient development. To overcome these challenges, the authors propose a task-based dataflow model that encapsulates accelerator invocations as first-class tasks via Task-Aware APIs (e.g., TACUDA, TASYCL). These tasks are scheduled uniformly by OpenMP/OmpSs-2 runtimes as directed acyclic graphs (DAGs), while integration with the nOS-V tasking library lets the coexisting runtimes share threads cooperatively instead of contending for them. This approach enables, for the first time, the transparent and efficient coexistence of multiple native accelerator programming models within a single application, significantly simplifying multi-API development while improving scalability and performance stability on current and future heterogeneous hardware.
📝 Abstract
Heterogeneous nodes that combine multi-core CPUs with diverse accelerators are rapidly becoming the norm in both high-performance computing (HPC) and AI infrastructures. Exploiting these platforms, however, requires orchestrating several low-level accelerator APIs, such as CUDA, SYCL, and Triton, often in combination with optimized vendor math libraries (e.g., cuBLAS and the oneAPI libraries). Each API or library introduces its own abstractions, execution semantics, and synchronization mechanisms, so combining them within a single application is error-prone and labor-intensive. We propose reusing a task-based data-flow methodology together with Task-Aware APIs (TA-libs) to overcome these limitations and enable the seamless integration of multiple accelerator programming models, while still leveraging the best-in-class kernels offered by each API.
Applications are expressed as a directed acyclic graph (DAG) of host tasks and device kernels managed by an OpenMP/OmpSs-2 runtime. We introduce Task-Aware SYCL (TASYCL) and leverage Task-Aware CUDA (TACUDA), which elevate individual accelerator invocations to first-class tasks. When multiple native runtimes coexist on the same multi-core CPU, they contend for threads, leading to oversubscription and performance variability. To address this, we unify their thread management under the nOS-V tasking and threading library, to which we contribute a new port of the PoCL (Portable OpenCL) runtime.
Our results demonstrate that task-aware libraries, coupled with the nOS-V library, enable a single application to harness multiple accelerator programming models transparently and efficiently. The proposed methodology is immediately applicable to current heterogeneous nodes and is readily extensible to future systems that integrate even richer combinations of CPUs, GPUs, FPGAs, and AI accelerators.