Leveraging Knowledge Graphs and LLMs for Structured Generation of Misinformation

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the structural disinformation threat amplified by generative AI, proposing a knowledge graph (KG)-driven framework for controllable misinformation generation. Methodologically, it is the first to leverage KG topological features, such as entity distance and predicate distribution, to model deceptive relational patterns, integrating structured prompt engineering with conditional large language model (LLM) generation to produce highly evasive false triples purely via prompting, without fine-tuning. Key contributions are: (1) demonstrating that inherent structural biases in KGs can be systematically exploited to induce semantic deception; and (2) empirically showing that the generated false statements are detected by humans at rates below 32% and evade state-of-the-art LLM-based detectors, whose accuracy falls under 58%, thereby exposing a fundamental limitation of current detection paradigms. The work establishes a novel evaluation benchmark and provides mechanistic insights for disinformation defense.

📝 Abstract
The rapid spread of misinformation, further amplified by recent advances in generative AI, poses significant threats to society, impacting public opinion, democratic stability, and national security. Understanding and proactively assessing these threats requires exploring methodologies that enable structured and scalable misinformation generation. In this paper, we propose a novel approach that leverages knowledge graphs (KGs) as structured semantic resources to systematically generate fake triplets. By analyzing the structural properties of KGs, such as the distance between entities and their predicates, we identify plausibly false relationships. These triplets are then used to guide large language models (LLMs) in generating misinformation statements with varying degrees of credibility. By utilizing structured semantic relationships, our deterministic approach produces misinformation inherently challenging for humans to detect, drawing exclusively upon publicly available KGs (e.g., WikiGraphs). Additionally, we investigate the effectiveness of LLMs in distinguishing between genuine and artificially generated misinformation. Our analysis highlights significant limitations in current LLM-based detection methods, underscoring the necessity for enhanced detection strategies and a deeper exploration of inherent biases in generative models.
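The abstract's core mechanism, scoring entity pairs by their distance in the KG and reusing a subject's existing predicates to form plausibly false triples, can be sketched in plain Python. This is a minimal illustration under assumed parameters (a toy fact set and a distance window of 2 to 4 hops); the entities, predicates, and thresholds below are illustrative and not taken from the paper or from WikiGraphs.

```python
from collections import deque, defaultdict

# Toy knowledge graph as (subject, predicate, object) triples.
# All entities and predicates here are illustrative placeholders.
facts = [
    ("Marie_Curie", "field", "Physics"),
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Albert_Einstein", "field", "Physics"),
    ("Albert_Einstein", "born_in", "Ulm"),
    ("Warsaw", "located_in", "Poland"),
    ("Ulm", "located_in", "Germany"),
]

# Undirected adjacency, used only to measure entity distance.
adj = defaultdict(set)
for s, _, o in facts:
    adj[s].add(o)
    adj[o].add(s)

def distances_from(start):
    """BFS shortest-path lengths from one entity to all reachable ones."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

def mine_false_triples(min_dist=2, max_dist=4):
    """Pair entities whose graph distance falls inside a plausibility
    window, then reuse a predicate the subject already has, so the
    resulting triple is false yet structurally typical of the KG."""
    existing = set(facts)
    subject_preds = defaultdict(set)
    for s, p, _ in facts:
        subject_preds[s].add(p)
    out = []
    for subj, preds in subject_preds.items():
        for obj, d in distances_from(subj).items():
            if min_dist <= d <= max_dist:
                for p in preds:
                    if (subj, p, obj) not in existing:
                        out.append((subj, p, obj))
    return out

candidates = mine_false_triples()
# e.g. ("Marie_Curie", "born_in", "Ulm"): 3 hops away, so near
# enough to be plausible, yet the statement is false.
```

The distance window is the key knob: entities one hop apart tend to produce triples that are true or trivially checkable, while very distant pairs yield claims too implausible to deceive, so a mid-range window targets the hardest-to-detect region.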
Problem

Research questions and friction points this paper is trying to address.

Systematically generate misinformation using knowledge graphs and LLMs
Assess LLMs' ability to distinguish real vs artificial misinformation
Explore structural KG properties to create hard-to-detect fake content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging knowledge graphs for structured misinformation generation
Using LLMs to generate varying credibility misinformation
Analyzing KG structural properties to identify false relationships
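Once a false triple is mined, the second stage conditions an LLM on it. The paper's actual prompts are not reproduced here, so the template below is a hypothetical sketch of how a triple plus a target credibility level could be turned into a generation prompt.

```python
# Hypothetical prompt template: the wording and the 1-5 credibility
# scale are assumptions for illustration, not the paper's prompts.
TEMPLATE = (
    "Write a single, confident news-style sentence asserting the "
    "following relation as established fact. "
    "Target credibility: {level}/5.\n"
    "Relation: {subject} -- {predicate} --> {object}"
)

def build_prompt(triple, level=3):
    """Render a (subject, predicate, object) triple into an LLM prompt."""
    subject, predicate, obj = triple
    return TEMPLATE.format(subject=subject, predicate=predicate,
                           object=obj, level=level)

prompt = build_prompt(("Marie_Curie", "born_in", "Ulm"), level=4)
```

Because the triple mining is deterministic, varying only the credibility parameter yields a controlled spectrum of statements for the same false fact, which is what enables the paper's graded human- and LLM-detection evaluation.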
Sania Nayab
Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
Marco Simoni
Institute of Informatics and Telematics, National Research Council of Italy; Sapienza Università di Roma
Giulio Rossolini
Scuola Superiore Sant'Anna
Trustworthy AI · Safe and Secure AI · Computer Vision · LLMs