Augmenting Expert Cognition in the Age of Generative AI: Insights from Document-Centric Knowledge Work

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
The rise of generative AI risks eroding human domain expertise, particularly in literature-intensive knowledge work such as writing academic literature reviews and making sense of business documents. Method: Drawing on cognitive science, workplace ethnography, and task-level analysis, this study develops the “selective delegation” theoretical framework, which advocates delegating low-cognitive-load tasks (e.g., information retrieval) to AI while preserving expert agency over higher-order cognitive functions (e.g., synthesis, critical interpretation, and meaning construction). Four metacognitive augmentation design principles are derived to balance cognitive offloading with professional development. Contribution/Results: Empirical findings from human-AI collaboration studies reveal the boundaries experts draw when delegating tasks to AI. The work provides both a theoretical foundation and practical guidelines for designing GenAI systems that genuinely augment, rather than supplant, human expertise.

📝 Abstract
As Generative AI (GenAI) capabilities expand, understanding how to preserve and develop human expertise while leveraging AI's benefits becomes increasingly critical. Through empirical studies in two contexts -- survey article authoring in scholarly research and business document sensemaking -- we examine how domain expertise shapes patterns of AI delegation and information processing among knowledge workers. Our findings reveal that while experts welcome AI assistance with repetitive information foraging tasks, they prefer to retain control over complex synthesis and interpretation activities that require nuanced domain understanding. We identify implications for designing GenAI systems that support expert cognition. These include enabling selective delegation aligned with expertise levels, preserving expert agency over critical analytical tasks, considering varying levels of domain expertise in system design, and supporting verification mechanisms that help users calibrate their reliance while deepening expertise. We discuss the inherent tension between reducing cognitive load through automation and maintaining the deliberate practice necessary for expertise development. Lastly, we suggest approaches for designing systems that provide metacognitive support, moving beyond simple task automation toward actively supporting expertise development. This work contributes to our understanding of how to design AI systems that augment rather than diminish human expertise in document-centric workflows.
Problem

Research questions and friction points this paper is trying to address.

Balancing AI assistance with human expertise preservation
Understanding expert preferences in AI task delegation
Designing AI systems to support expert cognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective AI delegation based on expertise levels
Preserving expert agency in analytical tasks
Metacognitive support for expertise development