🤖 AI Summary
Current evaluation protocols for audio representations do not systematically assess compositional structure, making it difficult to tell whether learned embeddings model sound scenes in terms of sources and their attributes. This work introduces the first compositional evaluation framework for audio, adapting paradigms from vision and language research into a benchmark built on controllable synthetic data. The benchmark comprises two core tasks: A-COAT (Assessing Consistency under Additive Transformations) and A-TRE (Attribute-based Reconstruction of Environments). Built on a large-scale, controllable synthetic dataset, the framework provides a reproducible and scalable foundation for evaluating the compositionality of audio embeddings and for analyzing how well learned representations capture the structured, generative nature of real-world auditory scenes.
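The summary names the two tasks but not their metrics. As an illustrative sketch only, the A-COAT idea of consistency under additive transformations can be read as comparing the embedding of a mixture against a simple composition of its sources' embeddings. The `embed` function, the vector-sum composition, and the cosine score below are assumptions made for the sake of a runnable example, not the benchmark's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(audio):
    """Stand-in encoder: a fixed random projection plus a nonlinearity.
    Any pretrained audio embedding model could be dropped in here."""
    W = np.random.default_rng(42).standard_normal((128, audio.shape[-1]))
    return np.tanh(W @ audio / np.sqrt(audio.shape[-1]))

# Two synthetic "sources" and their additive mixture (1 s at 16 kHz).
src_a = rng.standard_normal(16000)
src_b = rng.standard_normal(16000)
mixture = src_a + src_b

z_a, z_b, z_mix = embed(src_a), embed(src_b), embed(mixture)

# A-COAT-style check (our reading): does the embedding of the mixture agree with
# a simple composition (here, the vector sum) of the source embeddings?
composed = z_a + z_b
cosine = composed @ z_mix / (np.linalg.norm(composed) * np.linalg.norm(z_mix))
print(f"additive-consistency score (cosine similarity): {cosine:.3f}")
```

A score near 1 would indicate that, for this encoder and composition function, mixing sources in the signal domain corresponds to a predictable operation in embedding space.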
📝 Abstract
We propose a benchmark for evaluating compositionality in audio representations. Audio compositionality refers to representing sound scenes in terms of constituent sources and attributes that combine systematically. While central to auditory perception, this property is largely absent from current evaluation protocols. Our framework adapts ideas from vision and language to audio through two tasks: A-COAT, which tests consistency under additive transformations, and A-TRE, which probes reconstructibility from attribute-level primitives. Both tasks are supported by large synthetic datasets with controlled variation in acoustic attributes, providing the first benchmark for compositional structure in audio embeddings.
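Likewise, the A-TRE task of reconstructing embeddings from attribute-level primitives can be illustrated, under assumptions, with a least-squares probe: fit one vector per attribute and measure how well their additive combination reconstructs scene embeddings. The multi-hot attribute labels, the additive composition, and the relative-error score in this sketch are hypothetical details chosen for illustration, not the paper's definition of the task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 200 synthetic scenes, each labelled with a multi-hot vector
# over 12 acoustic attributes (source type, pitch range, reverberation level, ...).
n_scenes, n_attrs, dim = 200, 12, 64
attrs = (rng.random((n_scenes, n_attrs)) < 0.3).astype(float)

# Toy scene embeddings: an additive combination of ground-truth attribute vectors
# plus noise, standing in for the output of a real audio encoder.
true_primitives = rng.standard_normal((n_attrs, dim))
Z = attrs @ true_primitives + 0.1 * rng.standard_normal((n_scenes, dim))

# A-TRE-style probe (our reading): fit one primitive vector per attribute so that
# the sum of a scene's attribute primitives approximates its embedding, then report
# the relative reconstruction error. Lower values suggest the embedding space
# exposes attribute-level structure.
primitives, *_ = np.linalg.lstsq(attrs, Z, rcond=None)
recon = attrs @ primitives
rel_err = np.linalg.norm(Z - recon) / np.linalg.norm(Z - Z.mean(axis=0))
print(f"attribute reconstruction error (relative): {rel_err:.3f}")
```

In practice, the probe would be fit on embeddings from the benchmark's controlled synthetic scenes, where the attribute labels are known by construction.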