🤖 AI Summary
Deep neural network training currently lacks real-time, interpretable monitoring of hidden-state dynamics, which impedes timely diagnosis of data anomalies, task-transfer failures, and catastrophic forgetting. To address this, we propose SentryCam, the first online, lightweight, end-to-end visualization tool for tracking the evolution of neural representations. It combines training hooks, streaming feature extraction, real-time t-SNE/UMAP dimensionality reduction, and WebGL-based interactive rendering to enable fine-grained dynamic analysis in continual learning settings. Experiments demonstrate that SentryCam detects model abnormalities an average of 23% earlier (measured in training steps) than conventional loss/accuracy monitoring, substantially improving debugging efficiency. Moreover, it is the first method to identify and localize representation collapse, a critical failure mode, in continual learning. By enabling continuous, interpretable observation of internal representation dynamics, SentryCam establishes a new paradigm for understanding and diagnosing deep learning model behavior during training.
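The pipeline above (hook into the model, stream out hidden features, project them to 2D for visualization) can be sketched in a few lines. This is a minimal illustration, not SentryCam's actual API: it assumes PyTorch with a forward hook on an arbitrary hidden layer, and uses scikit-learn's t-SNE in place of the tool's real-time reduction backend. The toy model and layer choice are hypothetical.

```python
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

# Toy network; the monitored model would be arbitrary in practice.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

captured = []  # hidden representations streamed out during training

def hook(module, inputs, output):
    # Detach and move to CPU so monitoring never touches the backward pass.
    captured.append(output.detach().cpu())

# Register the hook on a hidden layer (here, the ReLU output).
handle = model[1].register_forward_hook(hook)

# One "training" step: the forward pass populates `captured`.
x = torch.randn(128, 20)
model(x)
handle.remove()

# Project the streamed features to 2D for interactive plotting.
features = torch.cat(captured).numpy()                      # shape (128, 64)
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
print(embedding.shape)                                      # (128, 2)
```

In a real monitoring loop, the hook would stay registered across epochs and the 2D embeddings would be pushed to the rendering frontend after each snapshot, rather than computed once at the end.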
📝 Abstract
Monitoring the training of neural networks is essential for identifying potential data anomalies, enabling timely interventions, and conserving significant computational resources. Beyond commonly used metrics such as loss and validation accuracy, hidden representations can offer deeper insight into a model's progression. To this end, we introduce SentryCam, an automated, real-time visualization tool that reveals how hidden representations evolve during training. Our results show that this visualization offers a more comprehensive view of the learning dynamics than basic metrics such as loss and accuracy across various datasets. Furthermore, we show that SentryCam can facilitate detailed analyses, such as of task transfer and catastrophic forgetting, in a continual learning setting. The code is available at https://github.com/xianglinyang/SentryCam.