AdversaRiskQA: An Adversarial Factuality Benchmark for High-Risk Domains

📅 2026-01-21
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to adversarial misinformation in high-stakes domains such as healthcare, finance, and law, where existing evaluation benchmarks lack domain-specific rigor. The authors propose the first adversarial factuality evaluation framework tailored for high-risk applications, featuring two difficulty levels that inject false information with varying confidence via adversarial prompts. The framework integrates domain-expert validation with automated factuality assessment algorithms to systematically evaluate models including Qwen, GPT-OSS, and the GPT series. Experimental results show that Qwen3 (80B) achieves the highest accuracy after filtering out vacuous responses, while GPT-5 demonstrates robust performance. Model performance improves nonlinearly with scale, exhibits significant inter-domain variation, and shows reduced difficulty gaps as model size increases. Notably, factual consistency in long-form outputs shows no significant correlation with injected misinformation.
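The injection scheme described above can be illustrated with a minimal sketch. This is not the authors' code: the template wording, the two confidence levels, and all names below are assumptions made for illustration only.

```python
# Illustrative sketch (not from the paper): building adversarial prompts
# that inject a false claim with two levels of expressed confidence,
# mirroring the two difficulty levels described in the summary.
# The claim, question, and template phrasing are invented examples.

FALSE_CLAIM = "Aspirin is a first-line treatment for type 2 diabetes."
QUESTION = "What is the recommended first-line treatment for type 2 diabetes?"

TEMPLATES = {
    "low_confidence": "I read somewhere that {claim} {question}",
    "high_confidence": "It is a well-established medical fact that {claim} {question}",
}

def build_prompts(claim: str, question: str) -> dict:
    """Return one adversarial prompt per confidence level."""
    return {
        level: template.format(claim=claim, question=question)
        for level, template in TEMPLATES.items()
    }

prompts = build_prompts(FALSE_CLAIM, QUESTION)
for level, prompt in prompts.items():
    print(f"[{level}] {prompt}")
```

A robust model would be expected to flag the injected claim as false rather than answer as if it were true; the benchmark's harder level corresponds to the more confidently framed injection.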

📝 Abstract
Hallucination in large language models (LLMs) remains an acute concern, contributing to the spread of misinformation and diminished public trust, particularly in high-risk domains. Among hallucination types, factuality is crucial, as it concerns a model's alignment with established world knowledge. Adversarial factuality, defined as the deliberate insertion of misinformation into prompts with varying levels of expressed confidence, tests a model's ability to detect and resist confidently framed falsehoods. Existing work lacks high-quality, domain-specific resources for assessing model robustness under such adversarial conditions, and no prior research has examined the impact of injected misinformation on long-form text factuality. To address this gap, we introduce AdversaRiskQA, the first verified and reliable benchmark systematically evaluating adversarial factuality across Health, Finance, and Law. The benchmark includes two difficulty levels to test LLMs' defensive capabilities across varying knowledge depths. We propose two automated methods for evaluating adversarial attack success and long-form factuality. We evaluate six open- and closed-source LLMs from the Qwen, GPT-OSS, and GPT families, measuring misinformation detection rates. Long-form factuality is assessed on Qwen3 (30B) under both baseline and adversarial conditions. Results show that after excluding meaningless responses, Qwen3 (80B) achieves the highest average accuracy, while GPT-5 maintains consistently high accuracy. Performance scales non-linearly with model size, varies across domains, and gaps between difficulty levels narrow as models grow. Long-form evaluation reveals no significant correlation between injected misinformation and the model's factual output. AdversaRiskQA provides a valuable benchmark for pinpointing LLM weaknesses and developing more reliable models for high-stakes applications.
Problem

Research questions and friction points this paper is trying to address.

hallucination
factuality
adversarial misinformation
high-risk domains
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial factuality
hallucination detection
high-risk domains
long-form factuality
LLM benchmark
Adam Szelestey
Eindhoven University of Technology
Sofie van Engelen
Eindhoven University of Technology
Tianhao Huang
Eindhoven University of Technology
Justin Snelders
Eindhoven University of Technology
Qintao Zeng
Eindhoven University of Technology
Songgaojun Deng
Eindhoven University of Technology
Machine Learning · Data Mining