ElfCore: A 28nm Neural Processor Enabling Dynamic Structured Sparse Training and Online Self-Supervised Learning with Activity-Dependent Weight Update

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of on-device closed-loop learning for event-driven sensory signal processing, this paper introduces ElfCore, a 28 nm digital spiking neural network (SNN) processor and the first to efficiently integrate three core capabilities: online temporal self-supervised learning, dynamic structured sparse training, and activity-dependent sparse weight updates. It employs event-driven computation, localized temporal learning engines, a structured pruning–retraining mechanism, and activity-aware update logic. Evaluated on gesture recognition, speech, and biomedical signal tasks, ElfCore achieves up to a 16× reduction in power consumption, a 3.8× decrease in on-chip memory usage, and a 5.9× improvement in network capacity efficiency over state-of-the-art baselines. These advances enable on-device, label-free, low-overhead closed-loop learning, a significant step toward energy-efficient, adaptive neuromorphic edge intelligence.
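The summary does not spell out ElfCore's learning rule. As a rough illustration of what label-free local temporal learning can look like, the sketch below uses an LMS-style rule (my own choice, not the paper's) in which each output neuron learns to predict the current input from the previous time step, using only activity local to the synapse:

```python
import numpy as np

def local_temporal_update(w, x_now, x_prev, lr=0.1):
    """Label-free local rule (illustrative, not ElfCore's): each output
    neuron adjusts its weights to predict the current input from the
    previous time step, using only pre/post activity at the synapse."""
    err = x_now - w @ x_prev            # temporal prediction error
    return w + lr * np.outer(err, x_prev)

# Toy input stream: the input rotates on the unit circle, x[t] = R @ x[t-1]
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
w = np.zeros((2, 2))
for _ in range(200):
    x_next = R @ x
    w = local_temporal_update(w, x_next, x)
    x = x_next
```

On this toy stream the learned weights converge toward the true transition matrix `R`, with no labels and no gradient signal propagated from other layers.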

📝 Abstract
In this paper, we present ElfCore, a 28nm digital spiking neural network processor tailored for event-driven sensory signal processing. ElfCore is the first to efficiently integrate: (1) a local online self-supervised learning engine that enables multi-layer temporal learning without labeled inputs; (2) a dynamic structured sparse training engine that supports high-accuracy sparse-to-sparse learning; and (3) an activity-dependent sparse weight update mechanism that selectively updates weights based solely on input activity and network dynamics. Demonstrated on tasks including gesture recognition, speech, and biomedical signal processing, ElfCore outperforms state-of-the-art solutions with up to 16X lower power consumption, 3.8X reduced on-chip memory requirements, and 5.9X greater network capacity efficiency.
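The abstract's third mechanism, selectively updating weights based on input activity, can be sketched as gating the update by presynaptic spiking. The function below is a hypothetical illustration (the names and gating rule are assumptions, not ElfCore's actual update logic): weight columns whose presynaptic neuron stayed silent are never read or written, which is where the power and memory-traffic savings come from.

```python
import numpy as np

def activity_gated_update(w, pre_spikes, post_err, lr=0.05):
    """Activity-dependent sparse update (illustrative): only the columns
    of w whose presynaptic neuron fired this window are touched; silent
    inputs incur no weight-memory traffic at all."""
    active = np.flatnonzero(pre_spikes)        # indices of active inputs
    w = w.copy()
    if active.size:
        w[:, active] += lr * np.outer(post_err, pre_spikes[active])
    return w, active.size

w0 = np.ones((3, 8))
spikes = np.zeros(8)
spikes[[1, 6]] = 1.0                # only 2 of 8 inputs are active
err = np.array([0.5, -0.5, 1.0])
w1, touched = activity_gated_update(w0, spikes, err)
```

Here only 2 of the 8 weight columns are updated; the other 6 are skipped entirely rather than multiplied by zero.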
Problem

Research questions and friction points this paper is trying to address.

Enabling dynamic structured sparse training without sacrificing learning accuracy
Performing online self-supervised learning without labeled inputs
Reducing the power and memory overhead of on-chip weight updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local online self-supervised learning without labeled inputs
Dynamic structured sparse training for high-accuracy learning
Activity-dependent sparse weight update based on input activity
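The "dynamic structured sparse training" contribution implies a sparse-to-sparse loop in which the sparsity pattern itself evolves during learning. A minimal RigL-style sketch at row granularity is shown below; magnitude-based pruning and gradient-based regrowth are my assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def structured_prune_regrow(w, mask, grad, swap=1):
    """One dynamic structured-sparsity step (illustrative, row granularity):
    drop the `swap` weakest active rows and activate the `swap` inactive
    rows with the largest gradient magnitude, so the number of active
    rows (and the on-chip weight footprint) stays constant."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    # Weakest active rows by L1 weight norm
    drop = active[np.argsort(np.abs(w[active]).sum(axis=1))[:swap]]
    # Most promising inactive rows by L1 gradient norm
    grow = inactive[np.argsort(np.abs(grad[inactive]).sum(axis=1))[-swap:]]
    new_mask = mask.copy()
    new_mask[drop] = 0
    new_mask[grow] = 1
    w = w * new_mask[:, None]        # zero out pruned rows
    w[grow] = 0.0                    # regrown rows start from zero
    return w, new_mask

rng = np.random.default_rng(1)
mask = np.array([1, 1, 1, 0, 0, 0])
w = rng.normal(size=(6, 4)) * mask[:, None]
grad = rng.normal(size=(6, 4))
w2, mask2 = structured_prune_regrow(w, mask, grad, swap=1)
```

Pruning whole rows (one output neuron's fan-in) rather than individual weights keeps the surviving pattern hardware-friendly: active rows stay dense, so memory accesses remain regular.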