SAR-LM: Symbolic Audio Reasoning with Large Language Models

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio understanding methods predominantly rely on opaque, dense embeddings, limiting their performance on structured reasoning tasks and hindering error attribution when failures occur. This paper proposes an end-to-end symbolic audio reasoning framework that replaces black-box embeddings with three interpretable, structured textual representations derived directly from raw audio: automatic speech recognition (ASR) transcripts, sound event labels, and musical chord sequences. By feeding these human-readable, semantically grounded features into large language models, the framework enables transparent reasoning pathways and traceable error localization. Our approach significantly enhances interpretability without sacrificing performance—achieving competitive results on three major multimodal benchmarks: MMAU, MMAR, and OmniBench. To our knowledge, this is the first work to systematically ensure semantic readability and analytical controllability of the audio reasoning process while maintaining high accuracy.

📝 Abstract
Large language models (LLMs) have advanced in text and vision, but their reasoning on audio remains limited. Most existing methods rely on dense audio embeddings, which are difficult to interpret and often fail on structured reasoning tasks. Caption-based approaches, introduced in recent benchmarks such as MMAU, improve performance by translating audio into text, yet still depend on dense embeddings as input, offering little insight when models fail. We present SAR-LM, a symbolic audio reasoning pipeline that builds on this caption-based paradigm by converting audio into structured, human-readable features across speech, sound events, and music. These symbolic inputs support both reasoning and transparent error analysis, enabling us to trace failures to specific features. Across three benchmarks, MMAU, MMAR, and OmniBench, SAR-LM achieves competitive results, while prioritizing interpretability as its primary contribution.
Problem

Research questions and friction points this paper is trying to address.

Enhancing audio reasoning with symbolic features instead of dense embeddings
Improving interpretability by converting audio to structured human-readable data
Enabling transparent error analysis across speech, sound events, and music
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symbolic audio reasoning pipeline converts audio into structured textual features (ASR transcripts, sound event labels, chord sequences)
Structured human-readable features enable transparent analysis
Competitive performance while prioritizing interpretability
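The pipeline described above can be sketched in a few lines: extract human-readable features per modality, then assemble them into a text prompt for an LLM. This is a minimal illustrative sketch, not the paper's implementation; the three extractor functions below are hypothetical placeholders standing in for real ASR, sound-event, and chord-recognition models.

```python
# Hedged sketch of a SAR-LM-style symbolic audio pipeline.
# The extractors are placeholders; real systems would call an ASR model,
# a sound event tagger, and a chord recognizer on the audio file.

def transcribe_speech(audio_path: str) -> str:
    # Placeholder for an ASR transcript of the clip.
    return "thanks for calling, how can I help you"

def detect_sound_events(audio_path: str) -> list[str]:
    # Placeholder for sound event labels.
    return ["telephone_ring", "keyboard_typing"]

def extract_chords(audio_path: str) -> list[str]:
    # Placeholder for a musical chord sequence.
    return ["C:maj", "G:maj", "A:min", "F:maj"]

def build_symbolic_prompt(audio_path: str, question: str) -> str:
    """Assemble the three symbolic feature streams into one
    human-readable prompt that an LLM can reason over."""
    features = {
        "ASR transcript": transcribe_speech(audio_path),
        "Sound events": ", ".join(detect_sound_events(audio_path)),
        "Chord sequence": " ".join(extract_chords(audio_path)),
    }
    lines = [f"{name}: {value}" for name, value in features.items()]
    return "\n".join(lines) + f"\nQuestion: {question}"

print(build_symbolic_prompt("clip.wav", "What is happening in this audio?"))
```

Because every intermediate representation is plain text, a wrong answer can be traced to a specific feature stream (e.g. a mis-transcribed word or a mislabeled sound event), which is the error-analysis property the paper emphasizes.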
Termeh Taheri
School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
Yi Ma
School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
Emmanouil Benetos
Queen Mary University of London
Machine listening · Audio signal processing · Music information retrieval · Machine learning