🤖 AI Summary
This work addresses the challenge of aligning multimodal biological data (visual, textual, and acoustic), with a focus on the underexplored integration of audio for species identification. To this end, the authors construct a large-scale multimodal biodiversity dataset comprising 1.3 million audio recordings and 2.3 million images. Building upon BioCLIP2, they propose a two-stage training framework that, for the first time, yields a unified vision–language–audio representation in the biological domain. The resulting model supports fine-grained cross-modal semantic understanding of species, and the authors establish an omnidirectional retrieval benchmark spanning the family, genus, and species taxonomic ranks. Experimental results demonstrate that the model effectively captures species-level semantic information within a shared embedding space, significantly advancing multimodal understanding of biodiversity.
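The summary above describes aligning audio with an existing image–text space. The paper's exact objective is not stated here, but BioCLIP-style models are typically trained with a symmetric InfoNCE contrastive loss over paired embeddings. A minimal NumPy sketch under that assumption (the `info_nce` helper is illustrative, not the authors' code):

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings.

    a, b: (n, d) arrays where row i of `a` is paired with row i of `b`
    (e.g., audio and image embeddings of the same species recording).
    This is an illustrative sketch, not the paper's actual objective.
    """
    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (n, n) similarity matrix
    labels = np.arange(len(a))     # matching pairs lie on the diagonal

    def ce(l):
        # numerically stable cross-entropy with diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average both retrieval directions (a->b and b->a)
    return 0.5 * (ce(logits) + ce(logits.T))
```

Perfectly matched pairs drive the loss toward zero, while shuffled pairings raise it, which is what pulls paired audio/image/text embeddings together in a shared space.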
📝 Abstract
Understanding animal species from multimodal data poses an emerging challenge at the intersection of computer vision and ecology. While recent biological models, such as BioCLIP, have demonstrated strong alignment between images and textual taxonomic information for species identification, the integration of the audio modality remains an open problem. We propose BioVITA, a novel visual-textual-acoustic alignment framework for biological applications. BioVITA comprises (i) a training dataset, (ii) a representation model, and (iii) a retrieval benchmark. First, we construct a large-scale training dataset comprising 1.3 million audio clips and 2.3 million images, covering 14,133 species annotated with 34 ecological trait labels. Second, building upon BioCLIP2, we introduce a two-stage training framework to effectively align audio representations with visual and textual representations. Third, we develop a cross-modal retrieval benchmark that covers every retrieval direction across the three modalities (i.e., image-to-audio, audio-to-text, text-to-image, and their reverse directions), at three taxonomic levels: Family, Genus, and Species. Extensive experiments demonstrate that our model learns a unified representation space that captures species-level semantics beyond taxonomy, advancing multimodal biodiversity understanding. The project page is available at: https://dahlian00.github.io/BioVITA_Page/
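Once the three modalities share a unified representation space, any retrieval direction in the benchmark (e.g., image-to-audio) reduces to nearest-neighbor search by cosine similarity between a query embedding and a gallery of embeddings from another modality. A minimal sketch with toy embeddings (the `retrieve` helper and the synthetic data are hypothetical, not the authors' benchmark code):

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=3):
    """Rank gallery items by cosine similarity to the query.

    query_emb: (d,) embedding from one modality (e.g., an image).
    gallery_embs: (n, d) embeddings from another modality (e.g., audio clips).
    Returns the indices of the top-k most similar gallery items.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                    # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]    # indices of best matches first

# Toy example: 5 audio embeddings; the image query is a slightly
# perturbed copy of audio clip 2, so clip 2 should rank first.
rng = np.random.default_rng(0)
audio = rng.normal(size=(5, 8))
image_query = audio[2] + 0.01 * rng.normal(size=8)
top = retrieve(image_query, audio, k=1)  # expect index 2
```

Swapping which modality supplies the query and which supplies the gallery gives all six retrieval directions; evaluating a match at the Family, Genus, or Species label of the retrieved item yields the three taxonomic levels of the benchmark.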