ChronosAudio: A Comprehensive Long-Audio Benchmark for Evaluating Audio-Large Language Models

📅 2026-01-08
🏛️ arXiv.org
🤖 AI Summary
This work addresses the lack of systematic evaluation of long-form audio understanding in current audio large language models, as existing benchmarks are largely confined to short audio segments. To bridge this gap, we introduce ChronosAudio—the first multitask benchmark specifically designed for long audio comprehension, comprising six diverse tasks and 36,000 annotated samples spanning over 200 hours, stratified by short, medium, and long durations. Leveraging large-scale human annotations and real-world audio data, we conduct a comprehensive evaluation of 16 state-of-the-art models through multitask performance assessment and attention analysis. Our experiments reveal a severe performance degradation—exceeding 90%—on long audio inputs, accompanied by significant attention dispersion. Moreover, current optimization strategies recover only about half of the lost performance, underscoring the critical challenges and urgent need for dedicated research in long-form audio understanding.

📝 Abstract
Although Audio Large Language Models (ALLMs) have witnessed substantial advancements, their long-audio understanding capabilities remain unexplored. While a plethora of benchmarks have been proposed for general audio tasks, they predominantly focus on short-form clips, leaving no consensus on how to evaluate ALLMs over extended durations. This paper proposes ChronosAudio, the first multi-task benchmark tailored for long-audio understanding in ALLMs. It encompasses six major task categories and comprises 36,000 test instances totaling over 200 hours of audio, stratified into short, medium, and long-form categories to comprehensively evaluate length generalization. Extensive experiments on 16 state-of-the-art models using ChronosAudio yield three critical findings: (1) Precipitous Long-Context Collapse: ALLMs exhibit a severe inability to sustain performance, with the transition from short to long contexts triggering a staggering performance degradation of over 90% on specific tasks. (2) Structural Attention Dilution: the degradation stems from a fundamental failure to maintain temporal locality; attention mechanisms suffer from significant diffusion over later portions of the sequence. (3) Restorative Ceiling of Mitigation: current mitigation strategies recover only about 50% of the lost performance. These findings reveal significant challenges in long-audio understanding, underscoring the urgent need for approaches that achieve robust, document-level audio reasoning.
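The two headline metrics in the abstract, the short-to-long performance drop and the fraction of that drop recovered by mitigation, can be made concrete with a small sketch. The accuracy values below are illustrative assumptions chosen to match the reported magnitudes, not numbers from the paper:

```python
# Hypothetical sketch of the length-generalization metrics described above.
# All scores are assumed illustrative values, not results from ChronosAudio.

def degradation(short_score: float, long_score: float) -> float:
    """Relative performance drop when moving from short to long audio."""
    return (short_score - long_score) / short_score

def recovery(base_long: float, mitigated_long: float, short_score: float) -> float:
    """Fraction of the short-to-long gap closed by a mitigation strategy."""
    gap = short_score - base_long
    return (mitigated_long - base_long) / gap if gap > 0 else 0.0

# Illustrative accuracies for one task:
short_acc, long_acc = 0.80, 0.07       # a >90% collapse on long inputs
mitigated_long_acc = 0.435             # mitigation restores about half the gap

print(f"degradation: {degradation(short_acc, long_acc):.0%}")  # degradation: 91%
print(f"recovery:    {recovery(long_acc, mitigated_long_acc, short_acc):.0%}")  # recovery: 50%
```

Framing recovery relative to the gap (rather than to the short-audio score) is what makes "50% recovery" meaningful as a ceiling: it measures how much of the collapse a mitigation actually undoes.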
Problem

Research questions and friction points this paper is trying to address.

long-audio understanding
Audio Large Language Models
benchmark
length generalization
long-context collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

long-audio understanding
audio large language models
benchmark
attention dilution
length generalization