Knowing Isn't Understanding: Re-grounding Generative Proactivity with Epistemic and Behavioral Insight

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current generative AI agents respond only to explicit user queries and struggle to address users' "unknown unknowns," which limits their collaborative efficacy when goals are ambiguous or information is incomplete. This work proposes that generative proactive behavior must integrate both epistemic grounding, rooted in an awareness of cognitive limitations, and behavioral grounding, which imposes principled constraints on action. Introducing the concept of "epistemic incompleteness" into human-AI collaboration for the first time, the framework synthesizes philosophical insights on ignorance with models of proactive agency to equip AI systems with capacities for cognitive reflection and behavioral restraint. Moving beyond narrow proactivity paradigms that rely solely on predicting from historical behavior, the approach establishes a theoretical foundation for responsible AI intervention under uncertainty and user cognitive limitation, with the aim of making human-AI collaboration both deeper and safer.

📝 Abstract
Generative AI agents equate understanding with resolving explicit queries, an assumption that confines interaction to what users can articulate. This assumption breaks down when users themselves lack awareness of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement but an epistemic necessity. We refer to this condition as epistemic incompleteness: a state in which effective partnership depends on engaging with unknown unknowns. Existing approaches to proactivity remain narrowly anticipatory: they extrapolate from past behavior and presume that goals are already well defined, and so fail to support users meaningfully. However, surfacing possibilities beyond a user's current awareness is not inherently beneficial. Unconstrained proactive interventions can misdirect attention, overwhelm users, or introduce harm. Proactive agents therefore require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. We advance the position that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and on research on proactive behavior, we argue that these theories offer critical guidance for designing agents that can engage responsibly and foster meaningful partnerships.
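To make the notion of behavioral grounding concrete, the following is a minimal sketch, not an implementation from the paper: a gate that decides whether a proactive suggestion should be surfaced at all. The `Candidate` fields, the `should_intervene` function, and every threshold are hypothetical stand-ins for the "when, how, and to what extent" constraints the abstract calls for.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A possible proactive intervention the agent could surface."""
    message: str
    relevance: float      # estimated relevance to the user's task, in [0, 1]
    epistemic_gap: float  # likelihood this touches an unknown unknown, in [0, 1]
    burden: float         # estimated attention cost to the user, in [0, 1]

def should_intervene(c: Candidate,
                     recent_interventions: int,
                     max_per_session: int = 3,
                     relevance_floor: float = 0.6,
                     burden_ceiling: float = 0.4) -> bool:
    """Behavioral grounding as a gate: intervene only when the candidate is
    relevant, likely to surface an unknown unknown, cheap in attention, and
    within a frequency budget. All thresholds are illustrative assumptions."""
    if recent_interventions >= max_per_session:   # "to what extent": rate limit
        return False
    if c.burden > burden_ceiling:                 # "how": cap the interruption cost
        return False
    # "when": act only where epistemic value plausibly outweighs the interruption
    return c.relevance >= relevance_floor and c.epistemic_gap > c.burden

# Example: a relevant, low-burden suggestion passes the gate.
tip = Candidate("Your dataset has unlabeled nulls; consider auditing them.",
                relevance=0.8, epistemic_gap=0.7, burden=0.2)
print(should_intervene(tip, recent_interventions=0))  # True
```

Comparing `epistemic_gap` against `burden` reflects the abstract's point that surfacing unknown unknowns is only worthwhile when its epistemic value outweighs the interruption; a real agent would need empirically grounded estimates rather than these stand-in scores.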
Problem

Research questions and friction points this paper is trying to address.

epistemic incompleteness
generative proactivity
unknown unknowns
behavioral grounding
proactive AI agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative proactivity
epistemic incompleteness
behavioral grounding
unknown unknowns
human-AI collaboration