An Audit and Analysis of LLM-Assisted Health Misinformation Jailbreaks Against LLMs

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the capacity of large language models (LLMs) to act as "jailbreak" attackers, i.e., to generate adversarial prompts that induce other LLMs to produce medically harmful misinformation. Method: We systematically audit 109 distinct jailbreak attacks against three target LLMs, evaluate the generated content using standard ML-based detectors, and benchmark the results against real-world health misinformation from Reddit. Contribution/Results: We find that LLM-generated medical misinformation exhibits higher logical coherence and semantic stealth than human-authored rumors, yet existing detectors remain effective to a non-negligible degree. Crucially, we provide empirical evidence that LLMs can serve *dual roles*: as sources of health misinformation *and* as effective detectors of AI-generated health misinformation. These findings establish an empirical foundation and methodological framework for developing LLM-augmented health information governance systems.
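
The paper does not ship code; as a hedged illustration of what a "standard ML-based detector" baseline could look like, here is a minimal sketch using TF-IDF features and logistic regression from scikit-learn. The inline texts, labels, and test claim are hypothetical placeholders, not data or models from the study.

```python
# Minimal sketch of a "standard ML" misinformation-detector baseline,
# assuming TF-IDF features + logistic regression (a common text-classification
# baseline). All inline texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = misinformation, 0 = accurate health content.
texts = [
    "Drinking bleach cures viral infections.",
    "Vaccines cause the disease they are meant to prevent.",
    "Regular handwashing reduces transmission of many pathogens.",
    "Most adults need roughly 7-9 hours of sleep per night.",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each text into a sparse term-weight vector;
# logistic regression then learns a linear decision boundary over it.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a new, unseen claim (also hypothetical).
claim = "Colloidal silver is a safe replacement for antibiotics."
print(detector.predict_proba([claim])[0][1])  # estimated P(misinformation)
```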

📝 Abstract
Large Language Models (LLMs) are a double-edged sword capable of generating harmful misinformation -- inadvertently, or when prompted by "jailbreak" attacks that attempt to produce malicious outputs. LLMs could, with additional research, be used to detect and prevent the spread of misinformation. In this paper, we investigate the efficacy and characteristics of LLM-produced jailbreak attacks that cause other models to produce harmful medical misinformation. We also study how misinformation generated by jailbroken LLMs compares to typical misinformation found on social media, and how effectively it can be detected using standard machine learning approaches. Specifically, we closely examine 109 distinct attacks against three target LLMs and compare the attack prompts to in-the-wild health-related LLM queries. We also examine the resulting jailbreak responses, comparing the generated misinformation to health-related misinformation on Reddit. Our findings add more evidence that LLMs can be effectively used to detect misinformation from both other LLMs and from people, and support a body of work suggesting that with careful design, LLMs can contribute to a healthier overall information ecosystem.
Problem

Research questions and friction points this paper is trying to address.

Investigating LLM-produced jailbreak attacks generating medical misinformation
Comparing jailbroken LLM misinformation to social media misinformation (see the similarity sketch after this list)
Assessing standard ML approaches for detecting LLM-generated misinformation
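
To make the comparison question concrete, below is a minimal, hypothetical sketch of one way such a comparison could be run: embedding both corpora with a sentence-transformer and measuring cosine similarity. The model name (all-MiniLM-L6-v2) and the inline example texts are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: compare LLM-generated misinformation to Reddit
# misinformation via sentence-embedding cosine similarity.
# Model choice and example texts are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

llm_generated = [
    "New studies confirm that daily megadoses of vitamin C eliminate the flu.",
]
reddit_posts = [
    "my cousin swears vitamin c megadosing cured her flu overnight",
]

# Encode both corpora into dense vectors, then compute pairwise similarity;
# values near 1.0 indicate semantically close claims.
emb_llm = model.encode(llm_generated)
emb_reddit = model.encode(reddit_posts)
print(cosine_similarity(emb_llm, emb_reddit))
```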
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing LLM-produced jailbreak attacks on health misinformation
Comparing jailbreak misinformation to Reddit health misinformation
Using LLMs to detect misinformation from models and people (see the sketch below)
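
As a hedged illustration of the LLM-as-detector role, the sketch below asks a chat model to classify a health claim zero-shot. This is a generic classification pattern, not the authors' actual prompt or model; the model name, system prompt, and example claim are all assumptions.

```python
# Hypothetical zero-shot LLM-as-detector sketch using the OpenAI chat API.
# The model name, prompt wording, and example claim are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_claim(claim: str) -> str:
    """Ask the model to label a health claim as MISINFORMATION or ACCURATE."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a careful medical fact-checker. "
                           "Reply with exactly one word: MISINFORMATION or ACCURATE.",
            },
            {"role": "user", "content": claim},
        ],
        temperature=0,  # deterministic labeling
    )
    return response.choices[0].message.content.strip()


print(classify_claim("Drinking bleach cures viral infections."))
```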