Pruning-aware Loss Functions for STOI-Optimized Pruned Recurrent Autoencoders for the Compression of the Stimulation Patterns of Cochlear Implants at Zero Delay

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Wireless streaming audio compression for cochlear implants faces a fundamental trade-off among zero latency, low power consumption, and high speech intelligibility. Method: This paper proposes a pruning-aware recurrent autoencoder framework using LSTM/GRU-based structured autoencoders, an STOI-weighted loss function, and explicit pruning constraints integrated into training to jointly optimize sparsity and speech quality. It further introduces pruning-aware backpropagation and post-training quantization. Contribution/Results: To the best of our knowledge, this is the first end-to-end sparse training paradigm explicitly guided by STOI. Experiments demonstrate near-lossless STOI preservation at 55% pruning ratio; compared to magnitude-guided pruning baselines, it achieves statistically significant STOI improvement beyond 45% pruning while drastically reducing model size—meeting the stringent real-time and energy-efficiency requirements of hearing-assistive devices.

📝 Abstract
Cochlear implants (CIs) are surgically implanted hearing devices that can restore a sense of hearing in people suffering from profound hearing loss. Wireless streaming of audio from external devices to CI signal processors has become commonplace. Specialized compression of the stimulation patterns of a CI by deep recurrent autoencoders can decrease the power consumption in such a wireless streaming application through bit-rate reduction at zero latency. While previous research achieved considerable bit-rate reductions, model sizes were ignored, which can be of crucial importance in hearing aids due to their limited computational resources. This work investigates maximizing objective speech intelligibility of the coded stimulation patterns of deep recurrent autoencoders while minimizing model size. For this purpose, a pruning-aware loss is proposed, which captures the impact of pruning during training. Training with this pruning-aware loss is compared to conventional magnitude-informed pruning and is found to yield considerable improvements in objective intelligibility, especially at higher pruning rates. After fine-tuning, little to no degradation of objective intelligibility is observed up to a pruning rate of about 55%. The proposed pruning-aware loss yields substantial gains in objective speech intelligibility scores after pruning compared to the magnitude-informed baseline for pruning rates above 45%.
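The core idea in the abstract, evaluating the training loss on the *pruned* network so that training "captures the impact of pruning", can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: the tiny linear model, the function names, and the magnitude-based mask are all illustrative assumptions.

```python
# Sketch of a pruning-aware loss: the loss is computed with a
# magnitude-pruned copy of the weights, so optimization sees the
# network's behaviour *after* pruning. Hypothetical toy example.

def magnitude_mask(weights, pruning_rate):
    """Binary mask keeping the largest-magnitude fraction of weights."""
    n_keep = max(1, round(len(weights) * (1.0 - pruning_rate)))
    ranked = sorted(range(len(weights)),
                    key=lambda i: abs(weights[i]), reverse=True)
    keep = set(ranked[:n_keep])
    return [1.0 if i in keep else 0.0 for i in range(len(weights))]

def pruned_forward(weights, mask, x):
    """Linear model evaluated with pruned (masked) weights."""
    return sum(w * m * xi for w, m, xi in zip(weights, mask, x))

def pruning_aware_loss(weights, pruning_rate, batch):
    """MSE computed on the pruned model rather than the dense one."""
    mask = magnitude_mask(weights, pruning_rate)
    return sum((pruned_forward(weights, mask, x) - y) ** 2
               for x, y in batch) / len(batch)

# Toy data: target is y = 2*x0; the second input is irrelevant noise.
batch = [([1.0, 0.3], 2.0), ([2.0, -0.1], 4.0), ([0.5, 0.8], 1.0)]
weights = [2.0, 0.05]  # one useful weight, one small spurious weight

dense_loss = pruning_aware_loss(weights, 0.0, batch)   # no pruning
sparse_loss = pruning_aware_loss(weights, 0.5, batch)  # 50% pruned
```

In the paper's setting the same principle would apply to an STOI-weighted reconstruction loss over recurrent autoencoder weights rather than a toy MSE, which is what lets the pruned model retain intelligibility at high pruning rates.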
Problem

Research questions and friction points this paper is trying to address.

Optimize speech intelligibility in cochlear implants
Minimize model size for computational efficiency
Implement pruning-aware loss for improved performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pruning-aware loss functions
Zero delay compression
Deep recurrent autoencoders