🤖 AI Summary
This study systematically investigates, for the first time, the problem of distinguishing human-authored from LLM-generated scientific ideas, a critical challenge in scientific creativity attribution. We evaluate state-of-the-art detection models on scientific ideas subjected to multiple rounds of human paraphrasing. Results show that five successive paraphrasing iterations substantially degrade detection performance, reducing average accuracy by 25.4%; notably, non-expert, simplified paraphrasing styles prove most confounding. Incorporating the original research question as a contextual prompt improves detection accuracy by up to 2.97%. The findings reveal that semantic-preserving yet representation-shifting paraphrasing poses a fundamental challenge to LLM provenance tracing, exposing the limited robustness of current detectors in scientifically grounded, cognitively nuanced scenarios. This work provides novel empirical evidence on the cognitive boundaries of LLM-assisted scientific ideation and underscores the need for more semantically resilient attribution methods.
📝 Abstract
With the increasing reliance on LLMs as research agents, distinguishing between LLM- and human-generated ideas has become crucial for understanding the cognitive nuances of LLMs' research capabilities. While detecting LLM-generated text has been extensively studied, distinguishing human- from LLM-generated scientific ideas remains largely unexplored. In this work, we systematically evaluate the ability of state-of-the-art (SOTA) machine learning models to differentiate between human- and LLM-generated ideas, particularly after successive paraphrasing stages. Our findings highlight the challenges SOTA models face in source attribution, with detection performance declining by an average of 25.4% after five consecutive paraphrasing stages. Additionally, we demonstrate that incorporating the research problem as contextual information improves detection performance by up to 2.97%. Notably, our analysis reveals that detection algorithms struggle significantly when ideas are paraphrased into a simplified, non-expert style, which contributes most to the erosion of distinguishable LLM signatures.