🤖 AI Summary
This study addresses the need for high-fidelity, biologically consistent synthetic sperm whale click sequences (codas) to advance modeling of their social communication mechanisms.
Method: We propose the first Transformer-based generative framework tailored to non-human vocalizations, built on VampNet, a masked acoustic token model pretrained on music. Through transfer learning and iterative masked token prediction, the model synthesizes codas from arbitrary audio prompts. It is fine-tuned on 10,000 field-recorded codas collected over the past two decades.
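For intuition, here is a minimal sketch of MaskGIT-style iterative masked decoding, the mechanism VampNet-family models use to fill in acoustic tokens; the function signature, cosine schedule, and single-sequence shapes are illustrative assumptions, not WhAM's actual code.

```python
import math
import torch

def iterative_masked_decode(model, tokens, mask, steps=12, temperature=1.0):
    # tokens: (seq,) long tensor of acoustic codes; mask: (seq,) bool,
    # True where a token must be generated. `model(tokens)` returning
    # (seq, vocab) logits is an assumed interface.
    total = int(mask.sum())
    masked = mask.clone()
    for step in range(steps):
        logits = model(tokens)                      # (seq, vocab)
        probs = torch.softmax(logits / temperature, dim=-1)
        conf, pred = probs.max(dim=-1)              # per-position confidence
        tokens = torch.where(masked, pred, tokens)  # fill masked slots
        # Cosine schedule: fraction of positions re-masked next round.
        frac = math.cos(math.pi / 2 * (step + 1) / steps)
        n_remask = int(frac * total)
        if n_remask == 0:
            break
        # Re-mask the least confident of the freshly filled positions.
        conf = conf.masked_fill(~masked, float("inf"))
        remask_idx = conf.argsort()[:n_remask]
        masked = torch.zeros_like(mask)
        masked[remask_idx] = True
    return tokens
```

In prompted generation, the prompt's tokens stay unmasked and fixed, so each refinement step pulls only the masked positions toward coda-like structure.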
Contribution/Results: Generated codas significantly outperform baselines in expert perceptual ratings and on Fréchet Audio Distance. The model's learned representations also perform strongly on downstream rhythm, social unit, and vowel classification, despite being trained for generation rather than classification. This work pioneers deep generative modeling for cetacean acoustic synthesis, establishing a new paradigm for bioacoustic research and conservation.
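For reference, Fréchet Audio Distance fits a Gaussian to embeddings of real and generated clips and measures the Fréchet distance between them; below is a minimal sketch using the standard closed form. The embedding model (e.g., VGGish) and the precomputed arrays are assumptions on our part, not details from the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb_real, emb_gen):
    # emb_real, emb_gen: (num_clips, dim) embedding arrays, assumed
    # precomputed by some audio embedding model.
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    cov_r = np.cov(emb_real, rowvar=False)
    cov_g = np.cov(emb_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # numerical noise can introduce
        covmean = covmean.real     # tiny imaginary parts; drop them
    diff = mu_r - mu_g
    # ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})
    return diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean)
```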
📝 Abstract
Sperm whales communicate in short sequences of clicks known as codas. We present WhAM (Whale Acoustics Model), the first transformer-based model capable of generating synthetic sperm whale codas from any audio prompt. WhAM is built by finetuning VampNet, a masked acoustic token model pretrained on musical audio, using 10k coda recordings collected over the past two decades. Through iterative masked token prediction, WhAM generates high-fidelity synthetic codas that preserve key acoustic features of the source recordings. We evaluate WhAM's synthetic codas using Fréchet Audio Distance and through perceptual studies with expert marine biologists. On downstream classification tasks including rhythm, social unit, and vowel classification, WhAM's learned representations achieve strong performance, despite being trained for generation rather than classification. Our code is available at https://github.com/Project-CETI/wham
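Downstream evaluations like these are typically run as probes on frozen embeddings; a hypothetical sketch is below. The variable names and the linear-probe protocol are our assumptions, not the paper's method.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_accuracy(embeddings, labels):
    # embeddings: (num_codas, dim) frozen activations extracted from
    # the generative model; labels: per-coda targets such as rhythm
    # type, social unit, or vowel class (all hypothetical names).
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, embeddings, labels, cv=5)
    return scores.mean()
```

High probe accuracy would indicate the representations encode these factors even though the model was never trained to classify them.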