🤖 AI Summary
Existing audio understanding methods predominantly rely on opaque, dense embeddings, limiting their performance on structured reasoning tasks and hindering error attribution when failures occur. This paper proposes an end-to-end symbolic audio reasoning framework that replaces black-box embeddings with three interpretable, structured textual representations derived directly from raw audio: automatic speech recognition (ASR) transcripts, sound event labels, and musical chord sequences. By feeding these human-readable, semantically grounded features into large language models, the framework enables transparent reasoning pathways and traceable error localization. The approach substantially enhances interpretability without sacrificing performance, achieving competitive results on three major multimodal benchmarks: MMAU, MMAR, and OmniBench. According to the authors, this is the first work to systematically ensure semantic readability and analytical controllability of the audio reasoning process while maintaining high accuracy.
📝 Abstract
Large language models (LLMs) have advanced in text and vision, but their reasoning on audio remains limited. Most existing methods rely on dense audio embeddings, which are difficult to interpret and often fail on structured reasoning tasks. Caption-based approaches, introduced in recent benchmarks such as MMAU, improve performance by translating audio into text, yet still depend on dense embeddings as input, offering little insight when models fail. We present SAR-LM, a symbolic audio reasoning pipeline that builds on this caption-based paradigm by converting audio into structured, human-readable features across speech, sound events, and music. These symbolic inputs support both reasoning and transparent error analysis, enabling us to trace failures to specific features. Across three benchmarks (MMAU, MMAR, and OmniBench), SAR-LM achieves competitive results while making interpretability its primary contribution.
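The core idea, converting each audio stream into human-readable symbolic features and composing them into a single textual prompt for an LLM, can be illustrated with a minimal sketch. The extractor outputs and the prompt layout below are hypothetical stand-ins, not the paper's actual feature formats or prompting templates.

```python
# Minimal sketch of a symbolic audio reasoning prompt builder in the
# spirit of SAR-LM. Feature values here are made-up examples; the real
# pipeline would obtain them from ASR, sound event detection, and
# chord recognition models.

def build_symbolic_prompt(transcript, sound_events, chords, question):
    """Assemble three symbolic feature streams into one human-readable
    prompt for a text-only LLM."""
    lines = [
        "You are given symbolic descriptions of an audio clip.",
        f"Speech transcript (ASR): {transcript or '[no speech detected]'}",
        "Sound events: " + ", ".join(
            f"{label} ({start:.1f}-{end:.1f}s)"
            for label, start, end in sound_events
        ),
        "Chord sequence: " + " ".join(chords),
        f"Question: {question}",
        "Answer using only the features above.",
    ]
    return "\n".join(lines)

# Example usage with illustrative feature values:
prompt = build_symbolic_prompt(
    transcript="thanks for coming to the show",
    sound_events=[("applause", 0.0, 2.5), ("guitar", 2.5, 10.0)],
    chords=["G", "D", "Em", "C"],
    question="What instrument plays after the applause?",
)
print(prompt)
```

Because every input is plain text, a wrong answer can be traced to a specific feature line (for example, a mislabeled chord) rather than to an opaque embedding, which is the transparency property the paper emphasizes.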