🤖 AI Summary
This study identifies a gap between stated attitudes and actual practice in disclosing generative AI use within digital humanities: while scholars widely acknowledge the ethical necessity of disclosure, actual reporting rates remain low, and significant disagreement persists over disclosure scope (e.g., data cleaning vs. text generation) and format (e.g., placement, granularity). Drawing on a mixed-methods investigation across multiple countries (N=XXX) that combines surveys with in-depth interviews, the research provides the first empirical characterization of scholars' disclosure attitudes, behaviors, and institutional expectations. A key finding is that most researchers favor standardized disclosure policies mandated by institutions or funding bodies rather than reliance on individual discretion. These findings furnish critical empirical grounding for developing cross-disciplinary AI transparency norms, advancing the field from ethical consensus toward institutionalized, accountable practice.
📝 Abstract
This survey study investigates how digital humanists perceive and approach the disclosure of generative AI (GenAI) use in research. The results indicate that while digital humanities scholars acknowledge the importance of disclosing GenAI use, the actual rate of disclosure in research practice remains low. Respondents differ over which activities most require disclosure and over the most appropriate methods for disclosing them. Most also believe that safeguards for AI disclosure should be established through institutional policies rather than left to individual discretion. The study's findings offer empirical guidance to scholars, institutional leaders, funders, and other stakeholders responsible for shaping effective disclosure policies.