Training Data Attribution for Image Generation using Ontology-Aligned Knowledge Graphs

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative models face severe challenges in transparency and copyright traceability, primarily due to the difficulty of establishing fine-grained provenance links between generated content and original training data. Method: This paper proposes an ontology-aligned knowledge graph construction method leveraging multimodal large language models (MLLMs). It first parses semantic content from generated images and extracts structured subject–predicate–object triples; it then performs cross-modal and cross-source ontology alignment to unify heterogeneous knowledge representations; finally, it enables interpretable, sample-level provenance tracing from outputs back to training instances. Contribution/Results: The method is validated on both local and large-scale models, significantly improving copyright attribution accuracy and dataset transparency. It provides a scalable, technically grounded foundation for responsible governance of generative AI and human-AI collaboration, advancing traceability beyond black-box generation.
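As a rough illustration of the extraction-and-alignment step described above (the paper's actual prompts, ontology, and alignment procedure are not reproduced here), raw subject–predicate–object triples emitted by an MLLM could be normalized against a small fixed ontology. The ontology entries and helper names below are hypothetical:

```python
# Hypothetical sketch: align raw subject-predicate-object triples
# (as an MLLM might emit them) to a small, fixed predicate ontology.

# Toy ontology: canonical predicate for each known surface form (assumed).
PREDICATE_ONTOLOGY = {
    "wears": "wearing",
    "wearing": "wearing",
    "sits on": "sitting_on",
    "sitting on": "sitting_on",
    "holds": "holding",
    "holding": "holding",
}

def align_triple(subj, pred, obj):
    """Map a raw triple onto an ontology-consistent form.

    Returns None when the predicate is outside the ontology, so the
    resulting knowledge graph stays ontology-aligned.
    """
    canonical = PREDICATE_ONTOLOGY.get(pred.strip().lower())
    if canonical is None:
        return None
    return (subj.strip().lower(), canonical, obj.strip().lower())

raw = [("Woman", "wears", "red hat"),
       ("Cat", "sits on", "chair"),
       ("Cat", "chases", "mouse")]  # "chases" is outside the toy ontology
aligned = [t for t in (align_triple(*r) for r in raw) if t is not None]
print(aligned)
# [('woman', 'wearing', 'red hat'), ('cat', 'sitting_on', 'chair')]
```

Discarding out-of-ontology triples is only one possible design choice; a fuller system might instead map unseen predicates to their nearest ontology class.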

📝 Abstract
As generative models become more powerful, concerns around transparency, accountability, and copyright violations have intensified. Understanding how specific training data contributes to a model's output is critical. We introduce a framework for interpreting generative outputs through the automatic construction of ontology-aligned knowledge graphs (KGs). While automatic KG construction from natural text has advanced, extracting structured and ontology-consistent representations from visual content remains challenging, owing to the richness and multi-object nature of images. Leveraging multimodal large language models (MLLMs), our method extracts structured triples from images, aligned with a domain-specific ontology. By comparing the KGs of generated and training images, we can trace potential influences, enabling copyright analysis, dataset transparency, and interpretable AI. We validate our method through experiments on locally trained models via unlearning, and on large-scale models through a style-specific experiment. Our framework supports the development of AI systems that foster human collaboration and creativity and stimulate curiosity.
Problem

Research questions and friction points this paper is trying to address.

Attributing training data influence on generated images
Extracting structured knowledge from visual content using multimodal LLMs
Enhancing transparency and copyright analysis in generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ontology-aligned knowledge graphs for attribution
Extracts structured triples from images via multimodal LLMs
Compares knowledge graphs of generated and training images
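The comparison step in the last bullet could, under one simple assumption (set overlap of aligned triples; the paper's actual similarity measure is not stated here), be sketched as ranking training images by the Jaccard similarity of their KGs to the generated image's KG. All identifiers below are illustrative:

```python
# Hypothetical sketch: rank training images by knowledge-graph overlap
# with a generated image, using Jaccard similarity over triple sets.

def jaccard(kg_a, kg_b):
    """Jaccard similarity between two KGs represented as sets of triples."""
    if not kg_a and not kg_b:
        return 0.0
    return len(kg_a & kg_b) / len(kg_a | kg_b)

def attribute(generated_kg, training_kgs):
    """Return (image_id, score) pairs sorted by KG similarity, descending."""
    scores = {img_id: jaccard(generated_kg, kg)
              for img_id, kg in training_kgs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

gen = {("woman", "wearing", "red hat"), ("woman", "holding", "umbrella")}
train = {
    "img_001": {("woman", "wearing", "red hat")},
    "img_002": {("cat", "sitting_on", "chair")},
}
ranking = attribute(gen, train)
print(ranking)
# img_001 ranks first: it shares one triple with the generated image's KG
```

A real attribution system would likely weight triples (rare relations carry more signal than common ones) rather than treating all overlaps equally.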
👥 Authors
Theodoros Aivalis (National Centre for Scientific Research “Demokritos”, Greece and University of Glasgow, UK)
I. Klampanos (University of Glasgow, UK)
Antonis Troumpoukis (NCSR "Demokritos")
J. Jose (University of Glasgow, UK)