Slumbering to Precision: Enhancing Artificial Neural Network Calibration Through Sleep-like Processes

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the common issue of overconfidence in artificial neural networks, where predicted probabilities often misalign with actual accuracy, undermining model reliability. Inspired by the spontaneous replay mechanisms observed during biological sleep, the authors propose Sleep Replay Consolidation (SRC)—a post-training, label-free calibration method that selectively replays internal representations and fine-tunes model weights without requiring supervised retraining. SRC introduces, for the first time, a sleep-like replay mechanism into the domain of model calibration and is designed to complement existing approaches such as temperature scaling. Experiments on AlexNet and VGG19 demonstrate that combining SRC with temperature scaling significantly improves the trade-off between Brier score and predictive entropy, thereby enhancing the reliability of confidence estimates.
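The summary above contrasts SRC with temperature scaling and reports calibration quality via Brier score and predictive entropy. As a quick reference, here is a minimal sketch of those standard pieces (not from the paper; all function names are illustrative, and SRC itself is not reproduced here):

```python
# Sketch of standard temperature scaling and the two calibration
# metrics named in the summary. Illustrative only; SRC is the
# paper's contribution and is not implemented here.
import numpy as np

def softmax(logits, T=1.0):
    # Temperature scaling divides logits by T before the softmax;
    # T > 1 softens (reduces) confidence, T < 1 sharpens it.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def brier_score(probs, labels):
    # Mean squared error between predicted probabilities and
    # one-hot true labels; lower means better calibration.
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))

def predictive_entropy(probs, eps=1e-12):
    # Average Shannon entropy of the predictive distribution;
    # higher entropy means less confident predictions.
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

# Toy example: overconfident logits softened with T = 2.
logits = np.array([[4.0, 1.0, 0.0], [0.5, 3.0, 0.2]])
labels = np.array([0, 1])
for T in (1.0, 2.0):
    p = softmax(logits, T)
    print(f"T={T}: Brier={brier_score(p, labels):.3f}, "
          f"entropy={predictive_entropy(p):.3f}")
```

Raising the temperature increases predictive entropy, which is why the summary frames calibration as a trade-off between Brier score and entropy rather than minimizing either alone.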

📝 Abstract
Artificial neural networks are often overconfident, undermining trust because their predicted probabilities do not match actual accuracy. Inspired by biological sleep and the role of spontaneous replay in memory and learning, we introduce Sleep Replay Consolidation (SRC), a novel calibration approach. SRC is a post-training, sleep-like phase that selectively replays internal representations to update network weights and improve calibration without supervised retraining. Across multiple experiments, SRC is competitive with and complementary to standard approaches such as temperature scaling. Combining SRC with temperature scaling achieves the best Brier score and entropy trade-offs for AlexNet and VGG19. These results show that SRC provides a fundamentally novel approach to improving neural network calibration. SRC-based calibration offers a practical path toward more trustworthy confidence estimates and narrows the gap between human-like uncertainty handling and modern deep networks.
Problem

Research questions and friction points this paper is trying to address.

neural network calibration
overconfidence
predicted probability
trustworthiness
uncertainty estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sleep Replay Consolidation
neural network calibration
confidence estimation
post-training calibration
spontaneous replay
👥 Authors
Jean Erik Delanois
Department of Computer Science & Engineering, University of California, San Diego, La Jolla, California, USA
Aditya Ahuja
Department of Computer Science & Engineering, University of California, San Diego, La Jolla, California, USA
Giri P. Krishnan
ARTISAN, Georgia Institute of Technology, Atlanta, Georgia, USA
Maxim Bazhenov
Professor of Medicine
Research interests: computational neuroscience, artificial intelligence, olfactory coding, epileptogenesis, sleep and memory consolidation