🤖 AI Summary
Existing automatic topic segmentation methods for multi-topic spoken content, such as videos and podcasts, often fail to leverage acoustic features effectively, resulting in limited performance and high sensitivity to ASR errors. This work proposes a multimodal end-to-end approach that jointly fine-tunes a text encoder with a Siamese audio encoder to explicitly model cross-modal acoustic cues around sentence boundaries, and is the first to systematically incorporate inter-sentential acoustic features to make topic segmentation more robust. Evaluated on a large-scale YouTube video dataset, the proposed method significantly outperforms both text-only and existing multimodal baselines. It also surpasses substantially larger text-only models across English, German, and Portuguese, demonstrating strong multilingual generalization and resilience to ASR noise.
📝 Abstract
Spoken content, such as online videos and podcasts, often spans multiple topics, which makes automatic topic segmentation essential for user navigation and downstream applications. However, current methods do not fully leverage acoustic features, which limits their performance. We propose a multimodal approach that jointly fine-tunes a text encoder and a Siamese audio encoder, capturing acoustic cues around sentence boundaries. Experiments on a large-scale dataset of YouTube videos show substantial gains over text-only and multimodal baselines. Our model also proves more resilient to ASR noise and outperforms a larger text-only baseline on three additional datasets in Portuguese, German, and English, underscoring the value of learned acoustic features for robust topic segmentation.
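The abstract does not spell out the fusion or training details, but the core idea can be sketched as a classifier over candidate sentence boundaries. The PyTorch snippet below is a minimal illustration, not the authors' implementation: it assumes a text encoder that yields one context vector per boundary and a single audio encoder applied with shared weights (the Siamese part) to the audio windows immediately before and after the boundary. All class names, shapes, and dimensions here are hypothetical.

```python
import torch
import torch.nn as nn

class SiameseBoundaryScorer(nn.Module):
    """Illustrative sketch only: fuses a text encoder with a weight-shared
    (Siamese) audio encoder that embeds the audio on either side of a
    candidate sentence boundary. Names and dimensions are assumptions."""

    def __init__(self, text_encoder: nn.Module, audio_encoder: nn.Module,
                 text_dim: int = 768, audio_dim: int = 768):
        super().__init__()
        self.text_encoder = text_encoder    # assumed to map text inputs -> (batch, text_dim)
        self.audio_encoder = audio_encoder  # assumed to map waveforms -> (batch, frames, audio_dim)
        self.classifier = nn.Linear(text_dim + 2 * audio_dim, 2)  # boundary vs. no boundary

    def forward(self, text_inputs, audio_before, audio_after):
        # Textual context around the candidate boundary.
        h_text = self.text_encoder(text_inputs)

        # Siamese step: the *same* audio encoder (shared weights) embeds the
        # windows preceding and following the boundary, letting the model
        # contrast inter-sentential acoustic cues such as pauses or pitch resets.
        h_before = self.audio_encoder(audio_before).mean(dim=1)  # pool over frames
        h_after = self.audio_encoder(audio_after).mean(dim=1)

        # Fuse both modalities and score the candidate boundary.
        fused = torch.cat([h_text, h_before, h_after], dim=-1)
        return self.classifier(fused)
```

Training such a scorer end to end with a cross-entropy loss on boundary labels would fine-tune both encoders jointly, which is the property the abstract emphasizes over pipelines that treat acoustics as fixed, hand-crafted features.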