An Investigation on How AI-Generated Responses Affect Software Engineering Surveys

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies systematic threats to data authenticity and validity in software engineering surveys posed by large language model (LLM) misuse. Method: We conducted two empirical surveys on Prolific (2025), integrating Scribbr’s AI detector, structured text comparison, narrative feature profiling, and qualitative pattern analysis to identify characteristic artifacts of AI-generated responses—including repetitive sequences, lexical homogenization, and superficial personalization. Contribution/Results: We introduce “data authenticity” as a novel validity dimension for software engineering survey research and propose a dual-verification framework combining automated detection with qualitative interpretation. We further advocate transparent reporting standards and community-level governance mechanisms. Empirical results demonstrate that AI-generated content significantly undermines construct validity, internal validity, and external validity—thereby establishing a methodological foundation and practical guidelines for safeguarding empirical research integrity in software engineering.

📝 Abstract
Survey research is a fundamental empirical method in software engineering, enabling the systematic collection of data on professional practices, perceptions, and experiences. However, recent advances in large language models (LLMs) have introduced new risks to survey integrity, as participants can use generative tools to fabricate or manipulate their responses. This study explores how LLMs are being misused in software engineering surveys and investigates the methodological implications of such behavior for data authenticity, validity, and research integrity. We collected data from two survey deployments conducted in 2025 through the Prolific platform and analyzed the content of participants' answers to identify irregular or falsified responses. A subset of responses suspected of being AI-generated was examined through qualitative pattern inspection, narrative characterization, and automated detection using the Scribbr AI Detector. The analysis revealed recurring structural patterns in 49 survey responses indicating synthetic authorship, including repetitive sequencing, uniform phrasing, and superficial personalization. These false narratives mimicked coherent reasoning while concealing fabricated content, undermining construct, internal, and external validity. Our study identifies data authenticity as an emerging dimension of validity in software engineering surveys. We emphasize that reliable evidence now requires combining automated and interpretive verification procedures, transparent reporting, and community standards to detect and prevent AI-generated responses, thereby protecting the credibility of surveys in software engineering.
Problem

Research questions and friction points this paper is trying to address.

Investigates AI misuse in software engineering surveys
Examines data authenticity and validity risks from AI responses
Proposes methods to detect and prevent fabricated survey answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using automated AI detection tools for response verification
Combining qualitative pattern inspection with narrative analysis
Implementing transparent reporting and community standards
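The detection ideas above can be sketched as a simple screening heuristic. This is an illustrative Python example, not the paper's actual pipeline (the study used the Scribbr AI Detector together with qualitative inspection); the thresholds, function names, and the n-gram/lexical-diversity features are assumptions chosen to mirror the artifacts the paper reports (repetitive sequencing and lexical homogenization):

```python
from collections import Counter

def ngram_repetition_rate(text: str, n: int = 3) -> float:
    """Fraction of word n-grams occurring more than once (repetitive sequencing)."""
    words = text.lower().split()
    if len(words) < n:
        return 0.0
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

def type_token_ratio(text: str) -> float:
    """Lexical diversity; very low values can signal lexical homogenization."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def flag_for_review(text: str, rep_threshold: float = 0.3,
                    ttr_threshold: float = 0.4) -> bool:
    """Heuristic screen: flags a response for manual inspection, not a verdict.
    Thresholds are hypothetical and would need calibration on real data."""
    return (ngram_repetition_rate(text) >= rep_threshold
            or type_token_ratio(text) <= ttr_threshold)
```

Such a screen could only triage responses for the qualitative, interpretive step the authors advocate; on its own it would produce both false positives and false negatives, which is why the paper argues for dual verification rather than automated detection alone.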