NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

📅 2023-04-10
📈 Citations: 7 (1 influential)
🤖 AI Summary
Neuromorphic computing lacks a unified evaluation standard, which hinders objective assessment of the technology, comparison across methods, and the identification of promising research directions. To address this, we propose the first open, collaborative neuromorphic computing benchmarking framework, designed to be inclusive, iterative, and community-driven. It introduces a dual-track evaluation paradigm that concurrently supports algorithm-level (hardware-agnostic) and system-level (hardware-dependent) evaluation. The framework comprises a modular toolchain, a multi-domain task suite (covering speech, vision, and time-series prediction), standardized performance measurement protocols, and an open-source evaluation interface. We release a first set of multi-task baseline results, benchmarking state-of-the-art neuromorphic models alongside conventional AI approaches on accuracy, energy efficiency, and latency. All evaluations are reproducible across platforms and architectures, establishing foundational infrastructure for standardized neuromorphic computing evaluation.
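To make the dual-track split concrete, here is a minimal, self-contained Python sketch of how such a harness could be driven: algorithm-track metrics are pure functions of model outputs, while system-track numbers are measured on the executing platform. All names in the sketch (Benchmark, run, accuracy) are illustrative assumptions for this note, not the actual NeuroBench interface; see the project's open-source repository for the real API.

```python
# Sketch of a dual-track benchmark harness in the spirit of NeuroBench.
# ASSUMPTION: class and metric names below are hypothetical, not the
# real NeuroBench API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple
import time

Model = Callable[[float], int]   # any callable: input sample -> prediction
Sample = Tuple[float, int]       # (input, label)


@dataclass
class Benchmark:
    model: Model
    dataset: Sequence[Sample]
    # Algorithm track: hardware-agnostic metrics computed from
    # predictions and labels alone (e.g. accuracy).
    algorithm_metrics: Dict[str, Callable[[List[int], List[int]], float]]

    def run(self) -> Dict[str, float]:
        preds, labels = [], []
        start = time.perf_counter()
        for x, y in self.dataset:
            preds.append(self.model(x))
            labels.append(y)
        elapsed = time.perf_counter() - start

        results = {name: fn(preds, labels)
                   for name, fn in self.algorithm_metrics.items()}
        # System track: hardware-dependent measurements. Wall-clock
        # latency is the only one portably measurable here; on real
        # neuromorphic hardware this would come from power/energy probes.
        results["latency_s_per_sample"] = elapsed / max(len(self.dataset), 1)
        return results


def accuracy(preds: List[int], labels: List[int]) -> float:
    return sum(p == t for p, t in zip(preds, labels)) / max(len(labels), 1)


if __name__ == "__main__":
    # Toy task: classify whether a number is non-negative.
    data = [(-1.0, 0), (0.5, 1), (2.0, 1), (-3.0, 0)]
    toy_model = lambda x: int(x >= 0)
    bench = Benchmark(toy_model, data, {"accuracy": accuracy})
    print(bench.run())  # {'accuracy': 1.0, 'latency_s_per_sample': ...}
```

The design point the sketch illustrates: the same model and task suite can be scored on both tracks, so a hardware-agnostic algorithm result and a hardware-dependent system result remain directly comparable.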
📝 Abstract
Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively-designed effort from an open community of researchers across industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we outline tasks and guidelines for benchmarks across multiple application domains, and present initial performance baselines across neuromorphic and conventional approaches for both benchmark tracks. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
Problem

Research questions and friction points this paper is trying to address.

Neuromorphic Computing
Testing Standards
Algorithm Evaluation

Innovation

Methods, ideas, or system contributions that make the work stand out.

NeuroBench
Neuromorphic Computing
Performance Evaluation

Yiğit Demirağ
University of Groningen