🤖 AI Summary
The increasing use of artificial intelligence (AI) in scientific research is accompanied by inconsistent and often inadequate disclosure practices, as authors face social, cognitive, and emotional barriers and current policies offer insufficient support. This study proposes DAISY, a form-based structured disclosure tool developed through a co-design approach that conceptualizes AI disclosure as a sociotechnical practice. Integrating insights from a literature review, co-design workshops (N=11), and user studies (N=31), DAISY draws on principles from human-computer interaction and responsible AI. Findings demonstrate that DAISY significantly enhances the completeness of disclosure statements and the clarity of AI-usage details while preserving authors' comfort in reporting. The tool thus offers a scalable, practical solution for improving transparency in research involving AI.
📄 Abstract
The use of AI tools in research is becoming routine, alongside growing consensus that such use should be transparently disclosed. However, AI disclosure statements remain rare and inconsistent, with policies offering limited guidance and authors facing social, cognitive, and emotional barriers when reporting AI use. To explore how structured disclosure shapes what authors report and how they experience disclosure, we present DAISY (Disclosure of AI-uSe in Your Research), a form-based tool for generating AI disclosure statements. DAISY was developed from literature-derived requirements and co-design (N=11), and deployed in a user study with authors (N=31). DAISY-supported disclosures met more completeness criteria, offering clearer breakdowns of AI use across research and writing than unsupported disclosures. Surprisingly, despite concerns about how transparently disclosed AI use might be perceived, the use of DAISY did not reduce author comfort with the disclosure statements. We discuss design implications and a research agenda for AI disclosure as a sociotechnical practice.