🤖 AI Summary
Long-standing issues in audio-based lyrics matching—including inconsistent benchmarks and non-reproducible experiments—have hindered progress. This paper introduces WEALY, an end-to-end, fully reproducible framework that leverages the Whisper decoder to extract joint acoustic–textual embeddings, integrated via contrastive learning and multimodal feature fusion. Through systematic ablation studies, the authors analyze the impact of language robustness, loss function selection, and embedding strategies, establishing a transparent and reliable evaluation benchmark. On standard datasets (e.g., MUSIC21, DALI), WEALY achieves performance competitive with state-of-the-art methods while substantially improving reproducibility and interpretability. The framework is open-sourced and provides a standardized, modular baseline for music information retrieval tasks, facilitating fair comparison and community-driven advancement.
📝 Abstract
Audio-based lyrics matching can be an appealing alternative to other content-based retrieval approaches, but existing methods often suffer from limited reproducibility and inconsistent baselines. In this work, we introduce WEALY, a fully reproducible pipeline that leverages Whisper decoder embeddings for lyrics matching tasks. WEALY establishes robust and transparent baselines, while also exploring multimodal extensions that integrate textual and acoustic features. Through extensive experiments on standard datasets, we demonstrate that WEALY achieves performance comparable to state-of-the-art methods that lack reproducibility. In addition, we provide ablation studies and analyses of language robustness, loss functions, and embedding strategies. This work contributes a reliable benchmark for future research and underscores the potential of speech technologies for music information retrieval tasks.
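The contrastive audio–text matching objective mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding dimensions, temperature value, and the InfoNCE-style loss below are generic assumptions, standing in for the decoder embeddings and loss choices that the paper ablates.

```python
import numpy as np

def info_nce(audio_emb, text_emb, temperature=0.07):
    """Symmetric-free InfoNCE sketch: each audio row's positive is the
    text row at the same batch index; all other rows are negatives."""
    # L2-normalize so the dot product is cosine similarity
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature  # (batch, batch) similarity matrix
    # Log-softmax over each row, then take the diagonal (matched pairs)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch of 4 pairs with 8-dim embeddings (dimensions are arbitrary)
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss_matched = info_nce(emb, emb)                     # identical pairs
loss_random = info_nce(emb, rng.normal(size=(4, 8)))  # unrelated pairs
```

Aligned audio and lyrics embeddings drive the matched-pair loss toward zero, while unrelated pairs leave it near the chance level of log(batch_size), which is what makes the loss usable for retrieval-style matching.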