🤖 AI Summary
Addressing two practical challenges in infant cry detection, the scarcity of fine-grained temporal annotations and severe background noise interference, this paper introduces CrySeg, the first professionally annotated dataset designed specifically for temporal cry segmentation. It further proposes CRSTC (Causal Representation Sparse Transition Clustering), an unsupervised method driven by causal temporal representation learning. CRSTC integrates causal discovery for modeling time-series dynamics, temporal contrastive learning for discriminative feature extraction, and sparse transition graph clustering for event boundary inference, enabling precise cry event segmentation without any manual labels. Evaluated under realistic noisy conditions, CRSTC achieves performance on par with state-of-the-art supervised methods, significantly improving segmentation accuracy and robustness. This work provides a deployable, annotation-free technical foundation for intelligent infant monitoring systems.
📝 Abstract
This paper addresses a major challenge in acoustic event detection, in particular infant cry detection in the presence of other sounds and background noise: the lack of precisely annotated data. We present two contributions, one supervised and one unsupervised, for infant cry detection. The first is an annotated dataset for cry segmentation, which enables supervised models to achieve state-of-the-art performance. Additionally, we propose a novel unsupervised method, Causal Representation Sparse Transition Clustering (CRSTC), based on causal temporal representation learning, which helps address the issue of data scarcity more generally. By integrating the detected cry segments, we significantly improve the performance of downstream infant cry classification, highlighting the potential of this approach for infant care applications.
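To make the sparse-transition idea behind CRSTC concrete, here is a purely illustrative sketch, not the authors' implementation: given frame-level embeddings (which CRSTC would obtain via causal temporal representation learning), an event boundary is placed wherever the similarity between consecutive frames drops sharply, i.e. at the sparse transitions between otherwise self-similar segments. The function name, the cosine-similarity choice, and the threshold are all assumptions made for this toy example.

```python
import numpy as np

# Illustrative sketch (not the paper's code): segment a sequence of
# frame-level embeddings by finding "sparse transitions", i.e. frames
# whose cosine similarity to the previous frame falls below a threshold.

def segment_by_transitions(embeddings, threshold=0.5):
    """Return indices of frames that start a new event: a boundary is
    placed where cosine similarity between consecutive frames < threshold."""
    X = np.asarray(embeddings, dtype=float)
    # L2-normalize each frame so the dot product of neighbors is cosine similarity.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X / np.clip(norms, 1e-12, None)
    sims = np.sum(X[:-1] * X[1:], axis=1)  # similarity of adjacent frames
    return [i + 1 for i, s in enumerate(sims) if s < threshold]

# Toy example: frames near [1, 0] (one event) then frames near [0, 1] (another).
frames = [[1.0, 0.0], [0.9, 0.1], [0.95, 0.05], [0.1, 0.9], [0.0, 1.0]]
boundaries = segment_by_transitions(frames, threshold=0.5)
# boundaries == [3]: the only low-similarity transition is into frame 3.
```

In a real system the embeddings would come from the learned representation and the boundary rule would operate on a transition graph rather than only adjacent frames, but the underlying intuition, that event boundaries coincide with rare, low-similarity transitions, is the same.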