🤖 AI Summary
Unstructured discourse in cultural heritage (CH) texts poses significant challenges for transforming contested knowledge—particularly authenticity debates—into queryable, structured knowledge graphs (KGs).
Method: This paper introduces ATR4CH, the first systematic methodology integrating large language models (LLMs)—including Claude Sonnet 3.7, Llama 3.3 70B, and GPT-4o-mini—with CH-specific ontologies via a five-stage pipeline: foundational analysis, annotation schema design, architecture implementation, integration optimization, and comprehensive evaluation.
Contribution/Results: Evaluated on Wikipedia texts concerning contested cultural artifacts, ATR4CH achieves metadata-extraction F1-scores of 0.96–0.99 and evidence-extraction F1-scores of 0.95–0.97; entity recognition (0.70–0.80) and hypothesis extraction (0.65–0.75) prove more challenging. Notably, smaller LLMs perform competitively at markedly lower cost. The framework enables cross-domain, multi-source KG construction while improving both the retrievability and interpretability of CH knowledge.
📝 Abstract
Cultural Heritage (CH) texts contain rich knowledge that is difficult to query systematically, owing to the challenge of converting unstructured discourse into structured Knowledge Graphs (KGs). This paper introduces ATR4CH (Adaptive Text-to-RDF for Cultural Heritage), a systematic five-step methodology for Large Language Model (LLM)-based Knowledge Extraction from Cultural Heritage documents. We validate the methodology through a case study on authenticity assessment debates.

Methodology - ATR4CH combines annotation models, ontological frameworks, and LLM-based extraction through iterative development: foundational analysis, annotation schema development, pipeline architecture, integration refinement, and comprehensive evaluation. We demonstrate the approach on Wikipedia articles about disputed items (documents, artifacts, etc.), implementing a sequential pipeline with three LLMs (Claude Sonnet 3.7, Llama 3.3 70B, GPT-4o-mini).

Findings - The methodology successfully extracts complex Cultural Heritage knowledge: F1-scores of 0.96-0.99 for metadata extraction, 0.70-0.80 for entity recognition, 0.65-0.75 for hypothesis extraction, and 0.95-0.97 for evidence extraction, with a G-EVAL score of 0.62 for discourse representation. Smaller models performed competitively, enabling cost-effective deployment.

Originality - This is the first systematic methodology for coordinating LLM-based extraction with Cultural Heritage ontologies. ATR4CH provides a replicable framework adaptable across CH domains and institutional resources.

Research Limitations - The produced KG is limited to Wikipedia articles. While the results are encouraging, human oversight remains necessary during post-processing.

Practical Implications - ATR4CH enables Cultural Heritage institutions to systematically convert textual knowledge into queryable KGs, supporting automated metadata enrichment and knowledge discovery.
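To make the sequential-pipeline idea concrete, here is a minimal sketch of how staged text-to-RDF extraction could be wired together. This is not the paper's implementation: the stage functions, URN scheme, and `ExtractionContext` structure are all hypothetical, and the stubbed extractors stand in for the LLM prompts (e.g. to Claude Sonnet 3.7 or GPT-4o-mini) that each stage would issue in practice.

```python
# Hypothetical sketch of a sequential text-to-RDF extraction pipeline
# in the spirit of ATR4CH. Each stage enriches a shared context; real
# stages would call an LLM, but stubs are used here so the data flow
# between stages is visible and runnable.
from dataclasses import dataclass, field

@dataclass
class ExtractionContext:
    text: str
    metadata: dict = field(default_factory=dict)
    entities: list = field(default_factory=list)
    triples: list = field(default_factory=list)

def extract_metadata(ctx: ExtractionContext) -> ExtractionContext:
    # Stage 1 (stub): document-level metadata; an LLM prompt in practice.
    ctx.metadata["title"] = ctx.text.splitlines()[0]
    return ctx

def extract_entities(ctx: ExtractionContext) -> ExtractionContext:
    # Stage 2 (stub): entity recognition, conditioned on earlier stages.
    # Crude title-case heuristic standing in for an LLM/NER call.
    ctx.entities = [w for w in ctx.text.split() if w.istitle()]
    return ctx

def serialize_rdf(ctx: ExtractionContext) -> ExtractionContext:
    # Stage 3: emit N-Triples-style statements from accumulated results.
    subj = f"<urn:artifact:{ctx.metadata['title'].replace(' ', '_')}>"
    for entity in ctx.entities:
        ctx.triples.append(f'{subj} <urn:rel:mentions> "{entity}" .')
    return ctx

def run_pipeline(text: str) -> ExtractionContext:
    # Stages run sequentially: each sees everything extracted before it.
    ctx = ExtractionContext(text)
    for stage in (extract_metadata, extract_entities, serialize_rdf):
        ctx = stage(ctx)
    return ctx

ctx = run_pipeline("Shroud of Turin\nSome scholars dispute its Medieval origin.")
for triple in ctx.triples:
    print(triple)
```

The design point mirrored here is the sequential coupling: downstream stages (evidence or hypothesis extraction, discourse representation) can condition on upstream outputs rather than re-deriving them, which is what allows smaller, cheaper models to handle individual stages.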