🤖 AI Summary
This work addresses the opacity and verifiability challenges of large audio language models in audio question answering by proposing an ensemble reasoning framework that integrates multi-source evidence. The approach combines independent observations from two large audio language models with a textual reasoning model that cross-validates those observations against the outputs of 25 acoustic analysis tools, constructing dense reasoning chains in which each step is explicitly grounded in external evidence and annotated with a reliability label. The study is the first to deeply integrate multi-level acoustic evidence with multi-model observations, substantially improving the factual accuracy, logical rigor, and traceability of the reasoning process. The system secured first place in the Agent Track of the Interspeech 2026 Audio Reasoning Challenge, significantly outperforming all competing approaches on the reasoning-quality metrics.
📄 Abstract
Large audio language models (LALMs) can answer questions about speech, music, and environmental sounds, yet their internal reasoning is largely opaque and difficult to validate. We describe TalTech's solution to the Agent Track of the Interspeech 2026 Audio Reasoning Challenge, in which systems are evaluated on the quality of their reasoning process, specifically the factual accuracy, logical soundness, and completeness of their reasoning chains. Our multi-source ensemble pipeline uses two LALMs that generate independent observations, while a separate text-only reasoning model cross-checks these against outputs from 25 acoustic tools organized into reliability tiers. By grounding every inference step in explicit, reliability-tagged evidence, the system produces dense, verifiable reasoning chains. Our system ranked first in the challenge, outperforming all competing systems by a wide margin on the challenge's reasoning-quality metric.
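To make the cross-checking idea concrete, here is a minimal sketch of how LALM observations might be tagged with reliability labels from corroborating tool outputs. All names here (`Evidence`, `build_reasoning_chain`, the tier labels, the example tool and claims) are hypothetical illustrations, not the paper's actual implementation; real cross-checking would need fuzzy claim matching rather than exact string equality.

```python
from dataclasses import dataclass

# Hypothetical reliability tiers for acoustic tools; the paper's actual
# tier scheme is not specified here.
TIER_RANK = {"high": 0, "medium": 1, "low": 2}

@dataclass
class Evidence:
    source: str   # e.g. "lalm_a" or "tool:tempo_estimator" (illustrative)
    claim: str    # a factual statement about the audio
    tier: str     # reliability tier of the producing source

def build_reasoning_chain(lalm_claims, tool_evidence):
    """Cross-check each LALM observation against tool outputs and emit
    reliability-tagged reasoning steps (unsupported claims are flagged)."""
    chain = []
    for claim in lalm_claims:
        support = [e for e in tool_evidence if e.claim == claim]
        if support:
            # Tag the step with the most reliable corroborating tier.
            best = min(support, key=lambda e: TIER_RANK[e.tier])
            chain.append({"claim": claim, "label": best.tier,
                          "evidence": [e.source for e in support]})
        else:
            chain.append({"claim": claim, "label": "unverified",
                          "evidence": []})
    return chain

# Example: one claim corroborated by a high-tier tool, one unsupported.
tools = [Evidence("tool:tempo_estimator", "tempo is ~120 BPM", "high")]
chain = build_reasoning_chain(["tempo is ~120 BPM", "genre is jazz"], tools)
```

The design choice reflected in the abstract is that every step carries its own evidence list and label, so a downstream judge (or the challenge's evaluation) can trace each inference back to its source rather than trusting an opaque chain of thought.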