Reporting LLM Prompting in Automated Software Engineering: A Guideline Based on Current Practices and Expectations

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are increasingly deployed in automated software engineering, yet existing research rarely reports prompt designs in a systematic and transparent manner, undermining reproducibility and comparability. Through an analysis of nearly 300 top-tier conference papers and a survey of 105 program committee members, this work systematically identifies critical omissions in prompt reporting, exposing deficiencies in disclosing prompt versions, justifying design choices, and addressing validity threats. Grounded in this empirical evidence, the authors propose a structured prompt reporting guideline with three tiers (essential, desirable, and exceptional elements) to improve transparency and methodological rigor in LLM-driven software engineering research.

📝 Abstract
Large Language Models, particularly decoder-only generative models such as GPT, are increasingly used to automate Software Engineering tasks. These models are primarily guided through natural language prompts, making prompt engineering a critical factor in system performance and behavior. Despite their growing role in SE research, prompt-related decisions are rarely documented in a systematic or transparent manner, hindering reproducibility and comparability across studies. To address this gap, we conducted a two-phase empirical study. First, we analyzed nearly 300 papers published at the top-3 SE conferences since 2022 to assess how prompt design, testing, and optimization are currently reported. Second, we surveyed 105 program committee members from these conferences to capture their expectations for prompt reporting in LLM-driven research. Based on the findings, we derived a structured guideline that distinguishes essential, desirable, and exceptional reporting elements. Our results reveal significant misalignment between current practices and reviewer expectations, particularly regarding version disclosure, prompt justification, and threats to validity. We present our guideline as a step toward improving transparency, reproducibility, and methodological rigor in LLM-based SE research.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Prompt Engineering
Software Engineering
Reproducibility
Research Transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Engineering
Large Language Models
Software Engineering
Reproducibility
Reporting Guidelines