🤖 AI Summary
This study addresses the lack of a unified definition, consistent purpose, and standardized reporting practices for pilot studies in human-computer interaction (HCI), a gap that has led to methodological ambiguity and inconsistent application. Using a meta-review methodology, the work systematically analyzes how pilot studies are referenced and described in CHI conference papers, applying content analysis and thematic categorization to reveal their actual roles and current reporting practices. The findings indicate that most papers mention pilot studies only briefly, commonly omitting critical details about their design and outcomes, highlighting a significant gap in methodological transparency within the HCI community. This research is the first to systematically expose the conceptual vagueness and reporting deficiencies surrounding pilot studies, laying the groundwork for a standardized definition and reporting framework.
📝 Abstract
Pilot studies are ubiquitous in HCI research. CHI papers routinely reference 'pilot studies', 'pilot tests', or 'preliminary studies' to justify design decisions, verify procedures, or motivate methodological choices. Yet despite their frequency, the role of pilot studies in HCI remains conceptually vague and empirically underexamined. Unlike fields such as medicine, nursing, and education, where pilot and feasibility studies have well-established definitions, guidelines, reporting standards, and even a dedicated research journal, the CHI community lacks a shared understanding of what constitutes a pilot study, why pilots are conducted, and how they should be reported. Many papers reference pilots 'in passing', without details about their design, their outcomes, or how they informed the main study. This variability suggests a methodological blind spot in our community.