AI Summary
Existing large audio language models struggle with tasks that require precise temporal localization, such as word alignment and speaker diarization. Conventional approaches that generate timestamps as text tokens suffer from high computational overhead, hallucination, and poor generalization to out-of-distribution long-form audio. This work proposes a frame-level internal tool-use mechanism that lets the model perform text-token-free temporal localization directly from its internal audio representations. The approach jointly trains a lightweight binary frame classifier with a novel inhomogeneous Poisson process loss. It significantly outperforms baseline methods on word localization, speaker diarization, and event localization, achieves over 50× faster inference, and maintains high accuracy on out-of-distribution long audio, scenarios where traditional methods fail completely.
Abstract
Large audio language models are increasingly used for complex audio understanding, but they struggle with tasks that require precise temporal grounding, such as word alignment and speaker diarization. The standard approach, in which timestamps are generated as sequences of text tokens, is computationally expensive and prone to hallucination, especially when processing audio lengths outside the model's training distribution. In this work, we propose frame-level internal tool use, a method that trains audio LMs to use their own internal audio representations to perform temporal grounding directly. We introduce a lightweight prediction mechanism trained via two objectives: a binary frame classifier and a novel inhomogeneous Poisson process (IHP) loss that models temporal event intensity. Across word localization, speaker diarization, and event localization tasks, our approach outperforms token-based baselines. Most notably, it achieves a >50× inference speedup and demonstrates robust length generalization, maintaining high accuracy on out-of-distribution audio durations where standard token-based models collapse completely.
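The abstract does not give the loss formulas, but the two objectives it names have standard forms. The sketch below is a minimal illustration, not the paper's implementation: a per-frame binary cross-entropy, and a discretized inhomogeneous Poisson process negative log-likelihood, which rewards high predicted intensity at frames containing events while penalizing the integral of the intensity over the whole clip. The frame duration `frame_dt` and the function names are assumptions for illustration only.

```python
import numpy as np

def frame_bce(probs, labels, eps=1e-8):
    """Binary cross-entropy over per-frame event probabilities.

    probs, labels: arrays of shape (num_frames,), labels in {0, 1}.
    """
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

def ihp_nll(intensity, event_frames, frame_dt=0.02):
    """Discretized inhomogeneous Poisson process negative log-likelihood.

    NLL = integral of intensity over time - sum of log-intensity at events,
    approximated on a uniform frame grid with step frame_dt seconds.
    intensity: nonnegative array of shape (num_frames,).
    event_frames: integer indices of frames where events occur.
    """
    integral = intensity.sum() * frame_dt          # Riemann-sum approximation
    log_terms = np.log(intensity[event_frames] + 1e-8).sum()
    return integral - log_terms

# Toy check: an intensity peaked at the true event frames scores better
# (lower NLL) than a flat intensity that ignores the events.
peaked = np.full(100, 0.1)
peaked[[10, 50]] = 5.0
flat = np.full(100, 0.1)
events = np.array([10, 50])
assert ihp_nll(peaked, events) < ihp_nll(flat, events)
```

Under this view, the classifier gives a hard per-frame decision while the IHP term shapes a smooth event-rate function, which is one plausible reason the combination helps localization.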