BiblioPage: A Dataset of Scanned Title Pages for Bibliographic Metadata Extraction

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Manual digitization of bibliographic metadata from historical and contemporary archival documents is inefficient due to heterogeneous layouts and the absence of domain-specific benchmark datasets. To address this, we introduce BiblioPage—the first publicly available scanned title-page dataset tailored for historical and archival bibliographic analysis. It comprises 2,000 title pages from Czech library collections, each annotated with 16 structured metadata categories via precise bounding boxes. BiblioPage reflects real-world diversity across time periods and typographic conventions, filling a critical gap in bibliographic metadata extraction research. We provide an end-to-end evaluation framework for fair benchmarking. Experimental results show that object detection (YOLO/DETR) combined with OCR achieves mAP 52 and F1 59, while vision-language models (Llama 3.2-Vision, GPT-4o) attain up to F1 67. The dataset and evaluation code are fully open-sourced to advance document understanding and information extraction research.
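The summary mentions an end-to-end evaluation framework reporting field-level F1. As a rough illustration only, here is a minimal sketch of how such a score might be computed over predicted versus gold metadata records; the exact matching rules (exact string match per field) are an assumption, not taken from the paper:

```python
def field_f1(predicted: dict, gold: dict) -> float:
    """Micro F1 over (field, value) pairs, assuming exact string matching."""
    pred_items = {(k, v) for k, v in predicted.items() if v}
    gold_items = {(k, v) for k, v in gold.items() if v}
    if not pred_items or not gold_items:
        return 0.0
    tp = len(pred_items & gold_items)          # fields both present and identical
    precision = tp / len(pred_items)
    recall = tp / len(gold_items)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction that gets the title right but the contributor wrong against a two-field gold record scores F1 = 0.5 under these assumptions.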

📝 Abstract
Manual digitization of bibliographic metadata is time-consuming and labor-intensive, especially for historical and real-world archives with highly variable formatting across documents. Despite advances in machine learning, the absence of dedicated datasets for metadata extraction hinders automation. To address this gap, we introduce BiblioPage, a dataset of scanned title pages annotated with structured bibliographic metadata. The dataset consists of approximately 2,000 monograph title pages collected from 14 Czech libraries, spanning a wide range of publication periods, typographic styles, and layout structures. Each title page is annotated with 16 bibliographic attributes, including title, contributors, and publication metadata, along with precise positional information in the form of bounding boxes. To extract structured information from this dataset, we evaluated object detection models such as YOLO and DETR combined with transformer-based OCR, achieving a maximum mAP of 52 and an F1 score of 59. Additionally, we assess the performance of various visual large language models, including Llama 3.2-Vision and GPT-4o, with the best model reaching an F1 score of 67. BiblioPage serves as a real-world benchmark for bibliographic metadata extraction, contributing to document understanding, document question answering, and document information extraction. Dataset and evaluation scripts are available at: https://github.com/DCGM/biblio-dataset
Problem

Research questions and friction points this paper is trying to address.

Manual bibliographic metadata digitization is slow and labor-intensive
Lack of dedicated datasets hinders metadata extraction automation
BiblioPage provides annotated title pages for benchmarking extraction models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scanned title pages dataset for metadata extraction
Combines object detection models with OCR
Evaluates visual large language models performance
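The detect-then-OCR approach in the list above can be sketched as a small pipeline. The functions `detect_regions` and `ocr_crop` below are hypothetical stand-ins for a YOLO/DETR detector and a transformer-based OCR engine, and the field-merging rule is an assumption for illustration:

```python
from typing import Callable

def extract_metadata(page_image,
                     detect_regions: Callable,  # image -> [(label, box), ...]
                     ocr_crop: Callable) -> dict:  # (image, box) -> str
    """Assemble a metadata record from detected, transcribed regions.

    Hypothetical sketch: the detector labels each region with one of the
    16 bibliographic attributes; OCR transcribes the cropped region.
    """
    record: dict = {}
    for label, box in detect_regions(page_image):
        text = ocr_crop(page_image, box).strip()
        if text:
            record.setdefault(label, []).append(text)
    # Join multi-box fields (e.g. several contributors) into one value.
    return {label: "; ".join(texts) for label, texts in record.items()}
```

In use, one would plug in the real detector and OCR model; with stub functions the pipeline simply groups transcriptions under their predicted attribute labels.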