🤖 AI Summary
With more and more meetings moving online, real-time summarization remains largely unexplored; this work presents the first systematic study of online meeting summarization, identifying its distinct requirements, including latency sensitivity, streaming input processing, and evaluation of partially generated summaries. Method: The authors propose several policies for conducting online summarization, both fixed-schedule and adaptive, and define novel metrics that jointly evaluate response latency and the quality of intermediate (partial) summaries. Contribution/Results: On the AutoMin dataset, online models produce strong summaries, adaptive policies outperform fixed-schedule baselines, and the proposed metrics enable fine-grained quality-latency trade-off analysis that accounts for intermediate outputs, establishing a starting point for further research on this task.
📝 Abstract
With more and more meetings moving to the digital domain, meeting summarization has recently gained interest in both academic and commercial research. However, prior academic research treats meeting summarization as an offline task, performed after the meeting concludes. In this paper, we perform the first systematic study of online meeting summarization. For this purpose, we propose several policies for conducting online summarization. We discuss the unique challenges of this task compared to the offline setting and define novel metrics to evaluate latency and partial summary quality. Experiments on the AutoMin dataset show that 1) online models can produce strong summaries, 2) our metrics allow a detailed analysis of different systems' quality-latency trade-off, also taking intermediate outputs into account, and 3) adaptive policies perform better than fixed-schedule ones. These findings provide a starting point for the wider research community to explore this important task.
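The contrast between fixed-schedule and adaptive policies can be illustrated with a minimal sketch. The paper does not specify its policies' implementation details; the function names, the token-budget threshold, and the speaker-change heuristic below are illustrative assumptions, not the authors' method. A fixed policy re-summarizes after every k utterances regardless of content, while an adaptive policy waits for a content-driven trigger, here a speaker change once enough new material has accumulated:

```python
# Illustrative sketch of two online-summarization triggering policies over a
# stream of (speaker, utterance) pairs. All names and thresholds are
# assumptions for illustration, not the paper's actual implementation.
from typing import List, Sequence, Tuple

Utterance = Tuple[str, str]  # (speaker, text)


def fixed_policy(stream: Sequence[Utterance], every: int = 2) -> List[int]:
    """Trigger a (re-)summarization after every `every` utterances."""
    return [i for i in range(len(stream)) if (i + 1) % every == 0]


def adaptive_policy(stream: Sequence[Utterance], max_tokens: int = 40) -> List[int]:
    """Trigger when the accumulated token budget is exceeded AND the speaker
    changes -- a crude proxy for a topic boundary."""
    triggers: List[int] = []
    budget = 0
    prev_speaker = None
    for i, (speaker, text) in enumerate(stream):
        budget += len(text.split())  # whitespace tokens as a rough token count
        if prev_speaker is not None and speaker != prev_speaker and budget >= max_tokens:
            triggers.append(i)
            budget = 0  # reset after emitting an intermediate summary
        prev_speaker = speaker
    return triggers


meeting = [
    ("A", "Let's start with the roadmap for the next quarter."),
    ("A", "We need to finalize the budget and the hiring plan."),
    ("B", "I think hiring should wait until the budget is approved."),
    ("B", "Otherwise we risk over-committing on headcount."),
    ("C", "Agreed, budget first, then we revisit hiring in March."),
    ("A", "Fine, I'll draft the budget proposal by Friday."),
]

print(fixed_policy(meeting))     # triggers at fixed intervals: [1, 3, 5]
print(adaptive_policy(meeting))  # a single content-driven trigger: [4]
```

The fixed policy incurs frequent, possibly redundant summarization calls (lower latency, higher cost), while the adaptive policy defers work until a plausible discourse boundary, which is the kind of quality-latency trade-off the proposed metrics are designed to measure.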