🤖 AI Summary
Current AI detection models exhibit severe misclassification when evaluating lightly AI-polished Arabic text, frequently flagging human-authored content as AI-generated and thereby undermining the reliability of academic integrity assessments. Method: Addressing a critical gap in Arabic-language research, we introduce Ar-APT, the first large-scale dataset of AI-polished Arabic text (16,400 samples), built from 400 human-authored articles polished by 10 LLMs under 4 polishing settings. We systematically evaluate 14 large language models and leading commercial detectors on both original and lightly polished variants. Contribution/Results: While the top-performing detector achieves 92% accuracy on original texts, its performance collapses on lightly polished Arabic text, dropping to as low as 12%, with markedly elevated false-positive rates. This work provides the first empirical evidence of the extreme sensitivity, and consequent unreliability, of existing AI detectors to light Arabic polishing, and it establishes a foundational benchmark and publicly available dataset to advance robust, multilingual AI detection research.
📝 Abstract
Many AI detection models have been developed to counter the proliferation of articles created by artificial intelligence (AI). However, if a human-authored article is slightly polished by AI, the decision of these detectors can shift across the classification boundary, leading them to label it as AI-generated. This misclassification may result in authors being falsely accused of AI plagiarism and harms the credibility of AI detection models. Some efforts have addressed this challenge in English, but none in Arabic. In this paper, we construct two datasets. The first contains 800 Arabic articles, half AI-generated and half human-authored; we use it to evaluate 14 Large Language Models (LLMs) and commercial AI detectors on their ability to distinguish human-authored from AI-generated articles. The eight best-performing models are then used to examine our primary concern: whether they classify slightly polished human text as AI-generated. The second dataset, Ar-APT, contains 400 Arabic human-authored articles polished by 10 LLMs under 4 polishing settings, totaling 16,400 samples. We use it to evaluate the eight selected models and determine whether slight polishing degrades their performance. The results reveal that all AI detectors incorrectly attribute a significant number of human-authored articles to AI. The best-performing LLM detector, Claude-4 Sonnet, achieved 83.51% accuracy on the original articles but dropped to 57.63% on articles slightly polished by LLaMA-3, while the best-performing commercial detector, Originality.AI, fell from 92% accuracy to 12% on articles slightly polished by Mistral or Gemma-3.
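To make the corpus arithmetic concrete, the sketch below shows one way the Ar-APT evaluation set could be assembled and scored: 400 originals plus 400 × 10 × 4 = 16,000 polished variants account for the reported 16,400 samples, assuming the originals are counted toward that total. The functions `polish_article` and `detect` are hypothetical placeholders for illustration, not the authors' released pipeline.

```python
from itertools import product

def build_ar_apt(human_articles, polishing_llms, settings, polish_article):
    """Pair each human-authored article with its lightly polished variants.

    With 400 articles x 10 LLMs x 4 settings = 16,000 polished variants,
    plus the 400 originals, the corpus reaches the reported 16,400 samples
    (assuming the originals are counted toward that total).
    """
    # Originals keep the "human" label; they were never touched by an LLM.
    samples = [{"text": a, "label": "human", "polisher": None} for a in human_articles]
    for article, llm, setting in product(human_articles, polishing_llms, settings):
        samples.append({
            "text": polish_article(article, llm, setting),  # hypothetical polishing call
            "label": "human",        # still human-authored, only lightly polished
            "polisher": (llm, setting),
        })
    return samples

def false_positive_rate(samples, detect):
    """Fraction of human-authored samples (original or polished) flagged as AI."""
    flagged = sum(detect(sample["text"]) == "ai" for sample in samples)
    return flagged / len(samples)
```

Because every sample in Ar-APT is human-authored, any "AI" verdict from a detector is a false positive, which is why the drop from 92% to 12% accuracy reported above corresponds directly to a surge in false accusations.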