🤖 AI Summary
This work addresses the challenge of efficient deep neural network (DNN) deployment on edge devices. We propose a novel model compression paradigm grounded in eXplainable Artificial Intelligence (XAI), diverging from conventional heuristic pruning or uniform quantization. For the first time, gradient-based XAI methods—specifically Layer-wise Relevance Propagation (LRP)—are leveraged to perform **per-weight importance modeling**, guiding importance-aware structured pruning and mixed-precision quantization. The approach preserves discriminative capability while enabling precise, interpretable compression. On multiple benchmark models and datasets, it achieves an average 64% reduction in model size and improves accuracy by 42% over existing XAI-driven compression methods. Our core contribution lies in establishing a differentiable, interpretable mapping between XAI explanations and compression decisions—thereby advancing trustworthy and efficient DNN compression.
📝 Abstract
Deep neural networks (DNNs) have demonstrated remarkable performance in many tasks, but this often comes at a high computational cost and memory usage. Compression techniques such as pruning and quantization reduce the memory footprint of DNNs and make it possible to accommodate them on resource-constrained edge devices. Recently, explainable artificial intelligence (XAI) methods have been introduced to understand and explain AI models. XAI can be used to probe the inner workings of DNNs, such as the importance of individual neurons and features to overall performance. In this paper, a novel XAI-based DNN compression approach is proposed to efficiently reduce model size with negligible accuracy loss. In the proposed approach, importance scores of the DNN parameters (i.e., weights) are computed using a gradient-based XAI technique called Layer-wise Relevance Propagation (LRP). The scores are then used to compress the DNN as follows: 1) parameters with negative or zero importance scores are pruned and removed from the model; 2) mixed-precision quantization is applied, assigning more bits to weights with higher scores and fewer bits to weights with lower scores. Experimental results show that the proposed approach reduces model size by 64% while improving accuracy by 42% compared to the state-of-the-art XAI-based compression method.
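The two compression steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-weight importance scores (e.g., from LRP) are already available with the same shape as the weight tensor, and the bit widths, split fraction, and symmetric uniform quantizer are hypothetical choices for demonstration.

```python
import numpy as np

def compress_layer(weights, scores, bits_high=8, bits_low=4, high_frac=0.5):
    """Prune and quantize one weight tensor using per-weight importance scores.

    `scores` is assumed to be an LRP-style relevance map with the same shape
    as `weights`; how the scores are computed is outside this sketch.
    """
    w = weights.copy()

    # Step 1) Pruning: weights with zero or negative relevance are removed.
    keep = scores > 0
    w[~keep] = 0.0

    kept_scores = scores[keep]
    if kept_scores.size == 0:
        return w

    # Step 2) Mixed-precision quantization: the top `high_frac` of surviving
    # weights (by score) get bits_high, the rest get bits_low.
    threshold = np.quantile(kept_scores, 1.0 - high_frac)

    def quantize(vals, bits):
        # Symmetric uniform quantization to 2**(bits-1) - 1 positive levels.
        if vals.size == 0:
            return vals
        scale = np.max(np.abs(vals))
        if scale == 0:
            return vals
        levels = 2 ** (bits - 1) - 1
        return np.round(vals / scale * levels) / levels * scale

    high_mask = keep & (scores >= threshold)
    low_mask = keep & (scores < threshold)
    w[high_mask] = quantize(w[high_mask], bits_high)
    w[low_mask] = quantize(w[low_mask], bits_low)
    return w
```

In practice the pruned entries would be stored in a sparse format and the quantized groups packed at their respective bit widths; here zeroing and re-rounding in float merely mimics the effect on the dense tensor.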