The Erosion of LLM Signatures: Can We Still Distinguish Human and LLM-Generated Scientific Ideas After Iterative Paraphrasing?

📅 2025-12-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study systematically investigates, for the first time, the problem of distinguishing human-authored from LLM-generated scientific ideas, a critical challenge in scientific creativity attribution. We evaluate state-of-the-art detection models on scientific ideas subjected to multiple rounds of paraphrasing. Results show that five successive paraphrasing iterations substantially degrade detection performance, reducing average accuracy by 25.4%; notably, simplified, non-expert paraphrasing styles prove most confounding. Incorporating the original research problem as contextual information improves detection accuracy by up to 2.97%. The findings reveal that semantics-preserving yet representation-shifting paraphrasing poses a fundamental challenge to LLM provenance tracing, exposing the limited robustness of current detectors in scientifically grounded, cognitively nuanced scenarios. This work provides novel empirical evidence on the cognitive boundaries of LLM-assisted scientific ideation and underscores the need for more semantically resilient attribution methods.
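
No code accompanies this digest; the Python sketch below illustrates the iterative-paraphrasing evaluation the summary describes. The helpers `detector` and `paraphrase` are assumptions standing in for the paper's actual detection models and paraphrasing pipeline.

```python
# Minimal sketch of the erosion evaluation loop described above.
# `detector` and `paraphrase` are hypothetical stand-ins for the paper's
# actual SOTA detection models and paraphrasing setup.

from typing import Callable, List

def detection_accuracy(
    ideas: List[str],
    labels: List[int],                 # 1 = LLM-generated, 0 = human-authored
    detector: Callable[[str], int],    # binary source-attribution model
) -> float:
    """Fraction of ideas whose predicted source matches the true label."""
    hits = sum(detector(text) == label for text, label in zip(ideas, labels))
    return hits / len(ideas)

def erosion_curve(
    ideas: List[str],
    labels: List[int],
    detector: Callable[[str], int],
    paraphrase: Callable[[str], str],  # one semantics-preserving rewrite
    rounds: int = 5,                   # the study uses five successive stages
) -> List[float]:
    """Detection accuracy after 0..rounds successive paraphrasing passes."""
    curve = [detection_accuracy(ideas, labels, detector)]
    texts = list(ideas)
    for _ in range(rounds):
        texts = [paraphrase(text) for text in texts]
        curve.append(detection_accuracy(texts, labels, detector))
    return curve
```

Comparing the first and last entries of the returned curve corresponds to the average 25.4% accuracy drop the study reports after five stages.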

📝 Abstract
With the increasing reliance on LLMs as research agents, distinguishing between LLM- and human-generated ideas has become crucial for understanding the cognitive nuances of LLMs' research capabilities. While detecting LLM-generated text has been extensively studied, distinguishing human- vs LLM-generated scientific ideas remains an unexplored area. In this work, we systematically evaluate the ability of state-of-the-art (SOTA) machine learning models to differentiate between human- and LLM-generated ideas, particularly after successive paraphrasing stages. Our findings highlight the challenges SOTA models face in source attribution, with detection performance declining by an average of 25.4% after five consecutive paraphrasing stages. Additionally, we demonstrate that incorporating the research problem as contextual information improves detection performance by up to 2.97%. Notably, our analysis reveals that detection algorithms struggle significantly when ideas are paraphrased into a simplified, non-expert style, contributing the most to the erosion of distinguishable LLM signatures.
Problem

Research questions and friction points this paper is trying to address.

Detect LLM- vs human-generated scientific ideas after paraphrasing
Evaluate SOTA models' ability to distinguish idea sources
Assess impact of iterative paraphrasing on detection performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating SOTA models for idea source attribution
Incorporating the research problem as contextual information to improve detection (see the sketch after this list)
Analyzing paraphrasing impact on LLM signature erosion
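
A minimal sketch of the context-augmentation step referenced above, assuming a simple concatenation template; the paper only states that the research problem is added as context, so the exact formatting here is an assumption.

```python
# Sketch of context-augmented detection: score the idea together with the
# research problem that prompted it. The prompt template is illustrative;
# the source does not specify how the context is formatted.

from typing import Callable

def detect_with_context(
    idea: str,
    research_problem: str,
    detector: Callable[[str], int],  # same binary detector as above
) -> int:
    """Attribute the idea's source with the research problem as context."""
    contextualized = (
        f"Research problem: {research_problem}\n"
        f"Proposed idea: {idea}"
    )
    return detector(contextualized)
```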
🔎 Similar Papers
No similar papers found.