ARC-Chapter: Structuring Hour-Long Videos into Navigable Chapters and Hierarchical Summaries

πŸ“… 2025-11-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing methods exhibit limited generalization for fine-grained chapter segmentation and hierarchical summarization of hour-long videos (e.g., lectures, documentaries) due to reliance on small-scale, coarse-grained annotations. To address this, we propose the first large-scale video chaptering framework: (1) constructing a bilingual, million-scale long-video chapter dataset with temporally aligned, hierarchical chapter annotations; (2) developing a multimodal annotation pipeline that integrates ASR transcripts, scene text, and visual captions; and (3) introducing a semantic-aligned sequence modeling approach together with GRACE, a novel evaluation metric that supports many-to-one alignment and semantic similarity computation. Our method achieves +14.0% F1 and +11.3% SODA improvements over prior work on standard benchmarks and demonstrates strong transferability to downstream tasks, establishing a new state of the art for long-video structural understanding.

πŸ“ Abstract
The proliferation of hour-long videos (e.g., lectures, podcasts, documentaries) has intensified demand for efficient content structuring. However, existing approaches are constrained by small-scale training with annotations that are typically short and coarse, restricting generalization to nuanced transitions in long videos. We introduce ARC-Chapter, the first large-scale video chaptering model trained on over a million long-video chapters, featuring bilingual, temporally grounded, and hierarchical chapter annotations. To achieve this goal, we curated a bilingual English-Chinese chapter dataset via a structured pipeline that unifies ASR transcripts, scene text, and visual captions into multi-level annotations, from short titles to long summaries. We demonstrate clear performance improvements with data scaling, both in data volume and label intensity. Moreover, we design a new evaluation metric termed GRACE, which incorporates many-to-one segment overlaps and semantic similarity, better reflecting real-world chaptering flexibility. Extensive experiments demonstrate that ARC-Chapter establishes a new state-of-the-art by a significant margin, outperforming the previous best by 14.0% in F1 score and 11.3% in SODA score. Moreover, ARC-Chapter shows excellent transferability, improving the state-of-the-art on downstream tasks like dense video captioning on YouCook2.
Problem

Research questions and friction points this paper is trying to address.

Developing a model to structure hour-long videos into navigable chapters
Addressing limitations of small-scale training with coarse annotations
Creating hierarchical summaries from multi-modal video content sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale video chaptering model trained on million-scale data
Multi-level annotations from transcripts, texts, and captions
New GRACE metric for flexible chapter evaluation
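The GRACE metric described above pairs many-to-one segment alignment with semantic similarity. A minimal sketch of that idea follows; this is not the paper's implementation, and `grace_like_score`, the `(start, end)` interval format, and the token-Jaccard stand-in for the semantic similarity model are all assumptions made for illustration:

```python
# Hypothetical GRACE-style chaptering score: each predicted chapter is
# aligned many-to-one to its best-overlapping ground-truth chapter, and
# the match is weighted by temporal IoU times a title-similarity term.

def iou(a, b):
    """Temporal IoU of two (start, end) intervals in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def title_sim(t1, t2):
    """Toy semantic similarity: token-level Jaccard overlap.
    (A real metric would use an embedding model here.)"""
    s1, s2 = set(t1.lower().split()), set(t2.lower().split())
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def grace_like_score(pred, gt):
    """pred/gt: lists of ((start, end), title) chapters.

    Many-to-one alignment: several predicted chapters may map to the
    same ground-truth chapter, so over-segmented but semantically
    faithful predictions are not punished as hard misses.
    """
    if not pred or not gt:
        return 0.0
    total = 0.0
    for seg, title in pred:
        # Best-overlapping ground-truth chapter for this prediction.
        j, best = max(enumerate(iou(seg, g[0]) for g in gt),
                      key=lambda x: x[1])
        total += best * title_sim(title, gt[j][1])
    return total / len(pred)
```

Splitting one ground-truth chapter into two well-titled predictions still earns partial credit from both overlap terms, which is the flexibility a strict one-to-one boundary F1 lacks.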
πŸ”Ž Similar Papers
No similar papers found.
Junfu Pu
ARC Lab, Tencent PCG
Teng Wang
ARC Lab, Tencent PCG
Yixiao Ge
ARC Lab, Tencent PCG
Yuying Ge
Tencent ARC Lab
deep learning, computer vision
Chen Li
ARC Lab, Tencent PCG
Ying Shan
Distinguished Scientist at Tencent, Director of ARC Lab & AI Lab CVC
Deep learning, computer vision, machine learning, paid search, display ads