🤖 AI Summary
This study investigates perceived usability, utility, and adoption disparities of generative AI—specifically Microsoft 365 Copilot—among employees in non-academic research institutions, with a focus on role-based differences in user experience and effectiveness. Through repeated cross-sectional surveys, the research examines Copilot’s ease of use, output quality, reliability, and task-specific utility in authentic knowledge-work settings. Findings indicate that administrative staff rate its practicality and reliability more favorably, while researchers increasingly recognize its capacity to reduce workload and enhance productivity over time. The tool demonstrates particular value for clearly structured, text-based tasks. These results highlight the learning and routinization effects of generative AI in knowledge work and underscore the need for role-sensitive, context-aware deployment strategies, tailored training, and adaptive governance to support sustainable adoption.
📝 Abstract
The study analyzes the introduction of Microsoft 365 Copilot in a non-university research organization using a repeated cross-sectional employee survey. We assess perceived usefulness, ease of use, output quality and reliability, and utility for typical knowledge-work activities. Administrative staff report higher perceived usefulness and reliability, whereas scientific staff develop more positive assessments over time, especially regarding productivity and workload reduction. Copilot is widely viewed as user-friendly and technically reliable, with the greatest added value for clearly structured, text-based tasks. The findings highlight learning and routinization effects when embedding generative AI into work processes and stress the need for context-sensitive implementation, role-specific training, and governance to foster sustainable acceptance of generative AI in knowledge-intensive organizations.