AI Summary
Automatic speech recognition (ASR) transcripts of meetings exhibit high noise levels, highly colloquial language, and long-range contextual dependencies, posing significant challenges for meeting assistant applications.
Method: We introduce the first long-context language model benchmark explicitly designed for real-world meeting assistance tasks. Built upon the ELITR corpus, it comprises 271 manually curated question-answer pairs with multi-level, controllable word error rate (WER)-induced ASR noise. We further propose a hybrid human-GPT-4 evaluation framework for rigorous assessment.
Contribution/Results: This work pioneers task-grounded long-context evaluation anchored in practical meeting understanding, introducing systematic ASR noise modeling and a mixed-evaluation paradigm. Experiments across 12 state-of-the-art long-context models reveal substantial generational differences in robustness to ASR noise. GPT-4-based automatic scoring achieves high agreement with human judgments (Spearman's ρ > 0.9), though it remains limited in fine-grained discrimination.
Abstract
Research on Large Language Models (LLMs) has recently witnessed an increasing interest in extending the models' context size to better capture dependencies within long documents. While benchmarks have been proposed to assess long-range abilities, existing efforts primarily considered generic tasks that are not necessarily aligned with real-world applications. In contrast, we propose a new benchmark for long-context LLMs focused on a practical meeting assistant scenario in which the long contexts consist of transcripts obtained by automatic speech recognition, presenting unique challenges for LLMs due to the inherent noisiness and oral nature of such data. Our benchmark, ELITR-Bench, augments the existing ELITR corpus by adding 271 manually crafted questions with their ground-truth answers, as well as noisy versions of meeting transcripts altered to target different Word Error Rate levels. Our experiments with 12 long-context LLMs on ELITR-Bench confirm the progress made across successive generations of both proprietary and open models, and point out their discrepancies in terms of robustness to transcript noise. We also provide a thorough analysis of our GPT-4-based evaluation, including insights from a crowdsourcing study. Our findings indicate that while GPT-4's scores align with human judges, its ability to distinguish beyond three score levels may be limited.
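The abstract describes transcripts altered to target different Word Error Rate levels. For reference, WER is the word-level Levenshtein distance between a reference and a hypothesis transcript, normalized by reference length. The sketch below shows the standard metric only; it is not the paper's noise-injection pipeline.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / #reference words,
    computed via word-level Levenshtein distance with a rolling DP row."""
    ref = reference.split()
    hyp = hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distance from empty reference prefix
    for i in range(1, len(ref) + 1):
        cur = [i] + [0] * len(hyp)
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution or match
        prev = cur
    return prev[len(hyp)] / max(len(ref), 1)
```

For example, `wer("we will meet on friday", "we will meet on friday")` is 0.0, while one substituted word in a five-word reference yields 0.2; a noise pipeline like the one described would perturb transcripts until such a target value is reached.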