MetaCLBench: Meta Continual Learning Benchmark on Resource-Constrained Edge Devices

📅 2025-03-31
🤖 AI Summary
This work addresses the research gap in meta-continual learning (Meta-CL) for time-series data—particularly audio—on edge devices. We introduce the first cross-modal (image + audio) Meta-CL benchmark framework, systematically evaluating six representative methods (e.g., MAML, Reptile, EWC, iCaRL) across accuracy, computational overhead, and memory footprint. Methodologically, we propose an end-to-end edge-adapted evaluation paradigm—the first to incorporate audio modalities into Meta-CL assessment—and demonstrate that joint pretraining and meta-training boosts average accuracy by 12.3%. Experiments employ lightweight models (MobileNetV2, TCN, TinyCNN) and monitor real-time edge system metrics (CPU/GPU utilization, memory, energy consumption), confirming cross-modal feasibility of diverse Meta-CL approaches while revealing substantial resource overhead. The framework is open-sourced to enable joint evaluation of predictive performance and system-level efficiency.
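The summary names MAML and Reptile among the six benchmarked methods. As orientation only, here is a minimal first-order meta-learning sketch in the Reptile style on a toy linear-regression task family; the model, hyperparameters, and function names are illustrative assumptions, not taken from the paper or its framework:

```python
import numpy as np

# Reptile-style meta-learning on a toy task family: each task t has targets
# y = w_t @ x. Tasks share structure, so a meta-learned initialization should
# adapt to a new task in a few gradient steps. All names are illustrative.

rng = np.random.default_rng(0)
dim, inner_steps, inner_lr, meta_lr = 5, 10, 0.05, 0.5

def sample_task():
    """Draw one task: a random linear map and a small support set."""
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(20, dim))
    return X, X @ w_true

def adapt(w, X, y, steps, lr):
    """Inner loop: plain SGD on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_meta = np.zeros(dim)
for _ in range(200):  # meta-training iterations
    X, y = sample_task()
    w_task = adapt(w_meta.copy(), X, y, inner_steps, inner_lr)
    # Reptile outer update: nudge the initialization toward adapted weights.
    w_meta += meta_lr * (w_task - w_meta)
```

MAML would instead differentiate through the inner loop; Reptile's first-order update is cheaper, which is one reason these methods differ in the computational overhead the benchmark measures.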

📝 Abstract
Meta-Continual Learning (Meta-CL) has emerged as a promising approach to minimize manual labeling efforts and system resource requirements by enabling Continual Learning (CL) with limited labeled samples. However, while existing methods have shown success in image-based tasks, their effectiveness remains unexplored for sequential time-series data from sensor systems, particularly audio inputs. To address this gap, we conduct a comprehensive benchmark study evaluating six representative Meta-CL approaches using three network architectures on five datasets from both image and audio modalities. We develop MetaCLBench, an end-to-end Meta-CL benchmark framework for edge devices to evaluate system overheads and investigate trade-offs among performance, computational costs, and memory requirements across various Meta-CL methods. Our results reveal that while many Meta-CL methods enable learning new classes for both image and audio modalities, they impose significant computational and memory costs on edge devices. We also find that pre-training and meta-training procedures based on source data before deployment improve Meta-CL performance. Finally, to facilitate further research, we provide practical guidelines for researchers and machine learning practitioners implementing Meta-CL in resource-constrained environments and make our benchmark framework and tools publicly available, enabling fair evaluation across both accuracy and system-level metrics.
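MetaCLBench evaluates system overheads (CPU/GPU utilization, memory, energy) alongside accuracy. As a rough illustration of the latency-and-memory side of such measurement, here is a standard-library-only Python sketch; it captures wall-clock time and peak Python heap usage via `tracemalloc`, and the function and workload names are hypothetical (GPU utilization and energy require platform-specific tooling not shown here):

```python
import time
import tracemalloc

def measure(fn, *args):
    """Run fn(*args), returning (result, elapsed_seconds, peak_heap_bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return result, elapsed, peak

# Hypothetical stand-in for an on-device adaptation step being profiled.
def dummy_adapt(n):
    return sum(i * i for i in range(n))

res, secs, peak = measure(dummy_adapt, 100_000)
print(f"latency={secs * 1000:.1f} ms, peak_mem={peak / 1024:.1f} KiB")
```

Wrapping each method's adaptation step in a harness like this is what makes it possible to compare Meta-CL approaches on system-level metrics rather than accuracy alone.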
Problem

Research questions and friction points this paper is trying to address.

Evaluating Meta-CL methods for sequential time-series data on edge devices
Assessing trade-offs between performance, computational costs, and memory usage
Providing guidelines for Meta-CL implementation in resource-constrained environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarks Meta-CL on edge devices
Evaluates image and audio modalities
Shows pre-training and meta-training on source data improve performance; offers practitioner guidelines