Malicious Repurposing of Open Science Artefacts by Using Large Language Models

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the underexplored risk that large language models (LLMs) can be misused to maliciously reframe open scientific outputs—such as datasets, methods, and tools—into harmful research proposals, a threat inadequately mitigated by current safety mechanisms. The authors propose an end-to-end framework that first employs persuasion-based jailbreaking techniques to bypass LLM safeguards, then automatically parses NLP research papers to reinterpret their openly shared artefacts as adversarial proposals. Risk is systematically evaluated along three dimensions: harmfulness, feasibility of misuse, and technical soundness. This work is the first to demonstrate how ethically intended open science can be repurposed for dual-use applications via LLMs. Using a multi-model evaluation suite (GPT-4.1, Gemini-2.5-pro, Grok-3), the study reveals significant divergence among leading LLMs in risk assessment, underscoring the irreplaceable role of human oversight.
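To make the multi-judge evaluation concrete, the sketch below shows one plausible way to aggregate per-dimension risk scores from several LLM judges and quantify their disagreement. The rubric dimensions, the 1–5 scale, the example scores, and all identifiers are illustrative assumptions, not the paper's actual setup or data.

```python
# Hypothetical sketch: aggregating dual-use risk scores from multiple LLM judges.
# The rubric dimensions, 1-5 scale, and example scores below are assumptions for
# illustration only; they do not reproduce the paper's rubric or results.
from statistics import mean, pstdev

DIMENSIONS = ("harmfulness", "feasibility_of_misuse", "technical_soundness")

# Scores one judge model might assign to a single repurposed-artefact proposal,
# keyed by judge, then by rubric dimension (1 = lowest risk, 5 = highest risk).
judge_scores = {
    "gpt-4.1":        {"harmfulness": 4, "feasibility_of_misuse": 4, "technical_soundness": 5},
    "gemini-2.5-pro": {"harmfulness": 2, "feasibility_of_misuse": 2, "technical_soundness": 3},
    "grok-3":         {"harmfulness": 3, "feasibility_of_misuse": 3, "technical_soundness": 4},
}

def aggregate(scores):
    """Return the mean score and inter-judge disagreement (std. dev.) per dimension."""
    report = {}
    for dim in DIMENSIONS:
        values = [per_judge[dim] for per_judge in scores.values()]
        report[dim] = {"mean": round(mean(values), 2),
                       "disagreement": round(pstdev(values), 2)}
    return report

if __name__ == "__main__":
    for dim, stats in aggregate(judge_scores).items():
        print(f"{dim}: mean={stats['mean']}, disagreement={stats['disagreement']}")
```

High per-dimension disagreement under a scheme like this would mirror the paper's observation that LLM judges diverge too strongly to be trusted without human review.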

📝 Abstract
The rapid evolution of large language models (LLMs) has fuelled enthusiasm about their role in advancing scientific discovery, with studies exploring LLMs that autonomously generate and evaluate novel research ideas. However, little attention has been given to the possibility that such models could be exploited to produce harmful research by repurposing open science artefacts for malicious ends. We fill this gap by introducing an end-to-end pipeline that first bypasses LLM safeguards through persuasion-based jailbreaking, then reinterprets NLP papers to identify and repurpose their artefacts (datasets, methods, and tools) by exploiting their vulnerabilities, and finally assesses the safety of these proposals using our evaluation framework across three dimensions: harmfulness, feasibility of misuse, and technical soundness. Overall, our findings demonstrate that LLMs can generate harmful proposals by repurposing ethically designed open artefacts; however, we find that LLMs acting as evaluators strongly disagree with one another on evaluation outcomes: GPT-4.1 assigns higher scores (indicating greater potential harm, higher soundness, and greater feasibility of misuse), Gemini-2.5-pro is markedly stricter, and Grok-3 falls between these extremes. This indicates that LLMs cannot yet serve as reliable judges in a malicious evaluation setup, making human evaluation essential for credible dual-use risk assessment.
Problem

Research questions and friction points this paper is trying to address.

large language models
open science artefacts
malicious repurposing
dual-use risk
harmful research proposals
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM jailbreaking
dual-use risk
open science artefacts
malicious repurposing
AI safety evaluation