🤖 AI Summary
Existing deep audio fingerprinting methods rely on fixed-length inputs, a constraint that fails to capture temporal dynamics and thus limits robustness in real-world scenarios. This work proposes the first end-to-end trainable variable-length audio fingerprinting model that unifies training and inference for audio segments of arbitrary duration through deep variable-length encoding, temporal feature aggregation, and similarity matching. By eliminating the conventional constraint of fixed segmentation, the proposed approach substantially enhances adaptability to temporal variations. It consistently outperforms state-of-the-art methods across three real-world datasets on both live audio identification and audio retrieval tasks.
📝 Abstract
Audio fingerprinting converts audio into much lower-dimensional representations, allowing distorted recordings to still be recognized as their originals through similar fingerprints. Existing deep learning approaches rigidly fingerprint fixed-length audio segments, thereby neglecting temporal dynamics during segmentation. To address the limitations of this rigidity, we propose Variable-Length Audio FingerPrinting (VLAFP), a novel method that supports variable-length fingerprinting. To the best of our knowledge, VLAFP is the first deep audio fingerprinting model capable of processing audio of variable length for both training and testing. Our experiments show that VLAFP outperforms existing state-of-the-art methods in live audio identification and audio retrieval across three real-world datasets.
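To make the variable-length fingerprinting pipeline concrete, here is a minimal sketch of the general idea: per-frame embeddings of arbitrary count are aggregated into one fixed-size fingerprint, and a distorted query is matched to a database by cosine similarity. This is purely illustrative; the `fingerprint` and `match` functions, the mean-pooling aggregation, and the random embeddings are assumptions for the sketch, not VLAFP's actual learned encoder or aggregation.

```python
import numpy as np

def fingerprint(frames: np.ndarray) -> np.ndarray:
    """Aggregate a variable number of frame embeddings (shape: [n_frames, d])
    into one fixed-size, L2-normalized fingerprint. Mean pooling stands in
    for a learned temporal aggregation module (an assumption of this sketch)."""
    agg = frames.mean(axis=0)
    return agg / np.linalg.norm(agg)

def match(query: np.ndarray, database: list[np.ndarray]) -> int:
    """Return the index of the database fingerprint most similar to the
    query under cosine similarity (dot product of unit vectors)."""
    sims = [float(query @ ref) for ref in database]
    return int(np.argmax(sims))

rng = np.random.default_rng(0)
# Reference tracks as frame embeddings of *different* lengths (variable-length input).
refs = [rng.normal(size=(n, 64)) for n in (50, 120, 80)]
db = [fingerprint(r) for r in refs]
# A "distorted recording": an excerpt of track 1 with additive noise.
query = fingerprint(refs[1][20:90] + 0.1 * rng.normal(size=(70, 64)))
print(match(query, db))  # the noisy excerpt is still matched to track 1
```

Because fingerprints are unit vectors of a fixed dimension regardless of input duration, segments of any length become directly comparable, which is the property that removes the fixed-segmentation constraint discussed above.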