GPS: Distilling Compact Memories via Grid-based Patch Sampling for Efficient Online Class-Incremental Learning

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe catastrophic forgetting, stringent memory constraints, and high computational overhead of replay in online class-incremental learning, this paper proposes a parameter-free, gradient-free gridded image patch sampling method. It generates low-resolution memory samples via sparse pixel sampling, significantly enhancing information density per storage unit while preserving semantic and structural integrity. The core innovation is a lightweight sampling mechanism that eliminates both bilevel optimization and trainable models—enabling zero-parameter memory distillation. The method integrates seamlessly into mainstream replay frameworks without imposing additional training overhead. Evaluated on multiple benchmarks, it achieves an average 3–4% improvement in final accuracy, maintains identical memory footprint, and incurs negligible computational cost.

📝 Abstract
Online class-incremental learning aims to enable models to continuously adapt to new classes with limited access to past data, while mitigating catastrophic forgetting. Replay-based methods address this by maintaining a small memory buffer of previous samples, achieving competitive performance. For effective replay under constrained storage, recent approaches leverage distilled data to enhance the informativeness of memory. However, such approaches often incur significant computational overhead due to their use of bi-level optimization. Motivated by these limitations, we introduce Grid-based Patch Sampling (GPS), a lightweight and effective strategy for distilling informative memory samples without relying on a trainable model. GPS generates informative samples by sampling a subset of pixels from the original image, yielding compact low-resolution representations that preserve both semantic content and structural information. During replay, these representations are reassembled to support training and evaluation. Extensive experiments on multiple benchmarks demonstrate that GPS can be seamlessly integrated into existing replay frameworks, yielding 3%-4% improvements in average end accuracy under memory-constrained settings with limited computational overhead.
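The mechanism described in the abstract, sampling a sparse pixel grid to form a compact memory sample and reassembling it at replay time, can be sketched as uniform strided pixel selection followed by nearest-neighbor upsampling. This is an illustrative approximation, not the paper's exact implementation; the stride value and the interpolation scheme are assumptions.

```python
import numpy as np

def grid_sample(image: np.ndarray, stride: int = 2) -> np.ndarray:
    """Keep every `stride`-th pixel along both spatial axes,
    yielding a compact low-resolution memory sample."""
    return image[::stride, ::stride]

def reassemble(sample: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor upsampling back to the original resolution
    so the stored sample can be replayed through the model."""
    rows = (np.arange(out_h) * sample.shape[0]) // out_h
    cols = (np.arange(out_w) * sample.shape[1]) // out_w
    return sample[rows][:, cols]

# A 32x32 RGB image compresses to 16x16: the same byte budget
# can hold 4x as many exemplars.
img = np.random.rand(32, 32, 3)
mem = grid_sample(img, stride=2)    # shape (16, 16, 3)
replayed = reassemble(mem, 32, 32)  # shape (32, 32, 3)
```

Both steps are pure indexing, which is what makes the distillation parameter-free and gradient-free: no model is trained to produce the compact sample.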
Problem

Research questions and friction points this paper is trying to address.

Enable models to adapt to new classes with limited past data access
Mitigate catastrophic forgetting in online class-incremental learning
Reduce computational overhead in memory distillation for efficient replay
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grid-based Patch Sampling for compact memories
Lightweight distillation without trainable model
Preserves semantic content and structural information
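To illustrate how such a zero-parameter distillation step slots into a standard replay framework, here is a minimal sketch of a reservoir-sampling buffer that stores grid-sampled images. The reservoir policy, stride, and capacity accounting are assumptions for illustration, not the paper's exact protocol.

```python
import random
import numpy as np

class CompactReplayBuffer:
    """Reservoir-sampling replay buffer holding grid-sampled images.
    Illustrative sketch only; the buffer policy is an assumption."""

    def __init__(self, capacity: int):
        self.capacity = capacity  # number of compact samples
        self.data = []            # (compact_image, label) pairs
        self.seen = 0             # total samples observed so far

    def add(self, image: np.ndarray, label: int, stride: int = 2):
        # Zero-parameter distillation: plain strided pixel sampling.
        compact = image[::stride, ::stride]
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((compact, label))
        else:
            # Standard reservoir sampling keeps a uniform subset
            # of the stream under a fixed capacity.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (compact, label)

    def sample(self, k: int):
        """Draw a replay mini-batch of stored compact samples."""
        return random.sample(self.data, min(k, len(self.data)))
```

Because a stride-2 sample uses a quarter of the bytes of the original image, the same memory footprint can hold roughly four times as many exemplars, which is the source of the claimed gain in information density per storage unit.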
👥 Authors

Mingchuan Ma
Sichuan University, Chengdu, Sichuan, China

Yuhao Zhou
Sichuan University, Chengdu, Sichuan, China

Jindi Lv
Sichuan University
deep learning, neural architecture search, multimodal

Yuxin Tian
Ph.D. candidate, Sichuan University
Deep Learning, Machine Learning

Dan Si
Sichuan University, Chengdu, Sichuan, China

Shujian Li
Sichuan University, Chengdu, Sichuan, China

Qing Ye
Sichuan University

Jiancheng Lv
University of Science and Technology of China
Operations Management, Marketing