🤖 AI Summary
This work addresses the emerging threat of environmental audio deepfakes, such as synthetic alarms or gunshots, which can be exploited to deceive listeners and compromise public safety, yet remain underexplored in detection research. To advance this nascent field, we launch the first environmental sound deepfake detection (ESDD) challenge, establishing a standardized dataset and a multidimensional evaluation protocol. The challenge attracted 97 participating teams, which submitted 1,748 valid solutions. Through systematic analysis of the top-performing systems, we identify common architectural patterns and training strategies, establishing the first benchmark for ESDD. This effort delivers a baseline system and a performance leaderboard, laying a foundation for future research in data, methodology, and evaluation.
📝 Abstract
Recent progress in audio generation has made it increasingly easy to create highly realistic environmental soundscapes, which can be misused to produce deceptive content, such as fake alarms, gunshots, and crowd sounds, raising concerns about public safety and trust. While deepfake detection for speech and singing voice has been extensively studied, environmental sound deepfake detection (ESDD) remains underexplored. To advance ESDD, the first edition of the ESDD challenge was launched, attracting 97 registered teams and receiving 1,748 valid submissions. This paper presents the task formulation, dataset construction, evaluation protocols, baseline systems, and key insights from the challenge results. Furthermore, we analyze common architectural choices and training strategies among top-performing systems. Finally, we discuss potential future research directions for ESDD, outlining key opportunities and open problems to guide subsequent studies in this field.
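The abstract mentions evaluation protocols without specifying the metric; challenges in related areas (e.g., ASVspoof for speech) conventionally rank detection systems by equal error rate (EER), the operating point where the false acceptance rate on fake audio equals the false rejection rate on real audio. A minimal sketch of that computation is shown below; the function name, the score convention (higher score = more likely real), and the use of EER here are illustrative assumptions, not details taken from this challenge.

```python
import numpy as np

def compute_eer(real_scores: np.ndarray, fake_scores: np.ndarray) -> float:
    """Equal error rate via a threshold sweep over all observed scores.

    Assumes higher scores indicate real (bona fide) audio -- an
    illustrative convention, not necessarily the challenge's.
    """
    thresholds = np.sort(np.concatenate([real_scores, fake_scores]))
    # False acceptance: fake clips scoring at or above the threshold.
    far = np.array([(fake_scores >= t).mean() for t in thresholds])
    # False rejection: real clips scoring below the threshold.
    frr = np.array([(real_scores < t).mean() for t in thresholds])
    # EER is where the two rates cross; take the closest sweep point.
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2.0)

# Perfectly separated scores give an EER of 0.
print(compute_eer(np.array([0.9, 0.8, 0.7]), np.array([0.1, 0.2, 0.3])))  # → 0.0
```

This brute-force sweep is O(n²) in the number of clips; production evaluation toolkits typically derive the same quantity from a sorted ROC curve in O(n log n).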