🤖 AI Summary
This paper addresses the growing risks of intensified surveillance, user lock-in, and monopolistic entrenchment posed by platform-controlled AI agents. To safeguard user autonomy and keep agents aligned with user interests, it proposes the paradigm of the user-sovereign AI agent, or *agent advocate*. It develops four governance pathways: open access to computational resources, interoperable protocol design, adoption of verifiable AI safety standards, and antitrust-aware regulatory sandboxes. Its contribution couples the concept of agent advocates with governance-layer technical measures, including protocol-level interoperability specifications, a modular AI safety standard framework, inclusive compute provisioning mechanisms, and collaborative regulatory tooling. The result is a governance blueprint for user-centered AI agents, offering both theoretical grounding and actionable recommendations for building next-generation agent ecosystems that balance innovation, fairness, and individual autonomy.
📝 Abstract
Language model agents could reshape how users navigate and act in digital environments. If controlled by platform companies -- either those that already dominate online search, communication, and commerce, or those vying to replace them -- platform agents could intensify surveillance, exacerbate user lock-in, and further entrench incumbent digital giants. This position paper argues that to resist the undesirable effects of platform agents, we should champion agent advocates: agents that are controlled by users, serve the interests of users, and preserve user autonomy and choice. We identify key interventions to enable agent advocates: ensuring public access to compute, developing interoperability protocols and safety standards, and implementing appropriate market regulations.