WARP-LCA: Efficient Convolutional Sparse Coding with Locally Competitive Algorithm

📅 2024-10-24
🏛️ Neurocomputing
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional Locally Competitive Algorithms (LCA) for Convolutional Sparse Coding (CSC) suffer from slow convergence, non-convex optimization induced by hard thresholding, and poor hardware efficiency. To address these issues, this paper proposes a hardware-cooperative LCA optimization framework. Our method introduces three key innovations: (1) a novel WARP-level parallel LCA solver that jointly accelerates sparse coding and dictionary update; (2) block-wise local response constraints and sliding-window sparse regularization to enhance biological plausibility and convergence stability; and (3) an adaptive dynamic thresholding mechanism to mitigate suboptimal solutions caused by hard thresholding. Evaluated on standard benchmarks including BSD500, our approach achieves a 5.3× speedup over conventional LCA, reduces memory footprint to one-quarter, improves reconstruction PSNR by 1.8 dB, and—critically—enables real-time video CSC for the first time.

Technology Category

Application Category

Problem

Research questions and friction points this paper is trying to address.

Improves efficiency of convolutional sparse coding with LCA
Addresses non-convex loss from hard-thresholding in LCA
Enhances solution quality and convergence speed in LCA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predictor network for LCA state initialization
Combines LCA with predictive priming
Enhances convergence speed and solution quality
🔎 Similar Papers
No similar papers found.
G
Geoffrey Kasenbacher
Mercedes-Benz AG, Böblingen, Germany
F
Felix Ehret
Mercedes-Benz AG, Böblingen, Germany
G
Gerrit Ecke
Mercedes-Benz AG, Böblingen, Germany
Sebastian Otte
Sebastian Otte
Institute for Robotics and Cognitive Systems
Artificial IntelligenceMachine LearningNeural Networks