MAD-UV: The 1st INTERSPEECH Mice Autism Detection via Ultrasound Vocalization Challenge

📅 2025-01-08
🤖 AI Summary
Current mouse models of autism spectrum disorder (ASD) lack objective, quantitative tools for vocal phenotype identification. Method: We introduce the first benchmark dataset for automated ASD classification based on ultrasonic vocalizations (USVs) in mice and pioneer the application of speech signal processing techniques to animal models of neurodevelopmental disorders. We propose a multi-band analytical framework integrating both ultrasonic and audible-frequency components, and design a convolutional neural network (CNN) classifier that fuses three types of time-frequency spectrogram features—including those from the audible band—to support both segment-level and individual-level classification. Results: Audible-band features yield the highest discriminative performance, achieving an unweighted average recall (UAR) of 0.600 at the segment level and 0.625 at the individual level. These results validate the feasibility of automated ASD detection via vocal phenotypes and establish a novel interdisciplinary methodology and benchmark platform for biomarker discovery in translational neuroscience.
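The multi-band idea above can be sketched as follows. This is a minimal illustration, not the challenge's actual pipeline: the 300 kHz sampling rate, the FFT parameters, and the 20 kHz audible/ultrasonic split are assumptions chosen for the example (mouse USVs typically lie well above the human audible range).

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed high sampling rate; the real challenge recordings may differ.
SR = 300_000

# Synthetic 1-second "recording": a 60 kHz ultrasonic tone plus broadband noise.
t = np.arange(SR) / SR
signal = np.sin(2 * np.pi * 60_000 * t) + 0.1 * np.random.randn(SR)

# Time-frequency representation, the kind of spectrogram a CNN would consume.
freqs, times, Sxx = spectrogram(signal, fs=SR, nperseg=1024, noverlap=512)

# Multi-band split: audible (< 20 kHz) vs. ultrasonic (>= 20 kHz) components.
audible = Sxx[freqs < 20_000]
ultrasonic = Sxx[freqs >= 20_000]
print(audible.shape, ultrasonic.shape)
```

Each band's spectrogram can then be treated as a separate image-like input channel for a CNN classifier.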

📝 Abstract
The Mice Autism Detection via Ultrasound Vocalization (MAD-UV) Challenge introduces the first INTERSPEECH challenge focused on detecting autism spectrum disorder (ASD) in mice through their vocalizations. Participants are tasked with developing models to automatically classify mice as either wild-type or ASD models based on recordings with a high sampling rate. Our baseline system employs a simple CNN-based classification using three different spectrogram features. Results demonstrate the feasibility of automated ASD detection, with the considered audible-range features achieving the best performance (UAR of 0.600 for segment-level and 0.625 for subject-level classification). This challenge bridges speech technology and biomedical research, offering opportunities to advance our understanding of ASD models through machine learning approaches. The findings suggest promising directions for vocalization analysis and highlight the potential value of audible and ultrasound vocalizations in ASD detection.
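The evaluation metric quoted above, unweighted average recall (UAR), is the mean of per-class recalls, so a classifier cannot score well by favoring the majority class. A minimal sketch of the metric (equivalent to macro-averaged recall; the toy labels are illustrative, not challenge data):

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: mean of per-class recalls."""
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Toy example: 0 = wild-type, 1 = ASD model.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 0, 0])
# Per-class recalls are 0.75 and 0.50, so UAR is 0.625.
print(uar(y_true, y_pred))
```

Subject-level scores are typically obtained by aggregating segment-level predictions per mouse (e.g., majority voting) before computing UAR.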
Problem

Research questions and friction points this paper is trying to address.

Autism Detection
Ultrasound Vocalizations
Mouse Model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ultrasound Vocalization Recognition
Autism Detection
Convolutional Neural Network (CNN)
👥 Authors
Zijiang Yang (The University of Tokyo, Japan; University of Augsburg, Germany)
Meishu Song (The University of Tokyo, Japan)
Xin Jing (Technische Universität München, Germany)
Haojie Zhang (Beijing Institute of Technology, China)
Kun Qian (Beijing Institute of Technology, China)
Bin Hu (Beijing Institute of Technology, China)
Kota Tamada (Kobe University, Japan)
Toru Takumi (Kobe University, Japan)
Björn W. Schuller (CHI, TUM, Germany; GLAM, ICL, UK)
Yoshiharu Yamamoto (The University of Tokyo, Japan)