🤖 AI Summary
To address the weakened user control and reduced transparency that arise when AI agents execute tasks autonomously, this paper proposes an automated permission management framework for AI agents. Methodologically, we conduct a user study to identify the key contextual factors influencing authorization decisions and develop a lightweight machine learning model that jointly incorporates contextual signals and individual preferences, enabling rapid low-shot adaptation. Our key contribution is the empirical finding that user authorization behavior exhibits cross-contextual consistency and inter-user similarity, which enables robust authorization modeling with minimal historical data. Experimental results show that the model achieves an overall prediction accuracy of 85.1%, rising to 94.4% for high-confidence predictions. Moreover, incorporating merely 1–4 annotated samples improves accuracy by 10.8 percentage points. This framework significantly enhances both the controllability and the explainability of AI agent data access from the user's perspective.
📝 Abstract
As AI agents attempt to act autonomously on users' behalf, they raise transparency and control issues. We argue that permission-based access control is indispensable for providing meaningful control to users, but conventional permission models are inadequate for the automated agentic execution paradigm. We therefore propose automated permission management for AI agents. Our key idea is to conduct a user study to identify the factors influencing users' permission decisions and to encode these factors into an ML-based permission management assistant capable of predicting users' future decisions. We find that participants' permission decisions are influenced by communication context; importantly, individual preferences tend to remain consistent within contexts and to align with those of other participants. Leveraging these insights, we develop a permission prediction model achieving 85.1% accuracy overall and 94.4% for high-confidence predictions. Even without using any permission history, our model achieves an accuracy of 66.9%, and adding only a few training samples (i.e., 1-4) substantially increases accuracy by 10.8%.
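The abstract's two empirical findings, that users agree with each other within a context and that a handful of a user's own labels adds a large accuracy gain, suggest a prediction scheme that starts from a population prior and adapts with few-shot user samples. The sketch below is purely illustrative, not the paper's model: it uses a weighted overlap vote in which the user's own labeled decisions outweigh population data, and it emits a confidence score so that high-confidence predictions can be separated out, as the paper does. All feature names and data are hypothetical.

```python
# Illustrative sketch (not the paper's model): predict a user's permission
# decision (allow/deny) from contextual features, combining a population
# prior with a few of the user's own labeled decisions.
from collections import Counter

def predict(request, population, user_history, user_weight=3):
    """Weighted overlap vote: each population sample contributes its
    feature overlap with the request; the user's own labeled decisions
    contribute `user_weight` times their overlap (few-shot adaptation)."""
    votes = Counter()
    for feats, label in population:
        votes[label] += len(feats & request)
    for feats, label in user_history:
        votes[label] += user_weight * len(feats & request)
    if not votes or sum(votes.values()) == 0:
        return "deny", 0.0  # fail closed when nothing matches
    label, score = votes.most_common(1)[0]
    confidence = score / sum(votes.values())
    return label, confidence

# Population prior: other users' decisions in similar contexts
# (reflecting the finding that preferences align across users).
population = [
    ({"work", "calendar"}, "allow"),
    ({"work", "email"}, "allow"),
    ({"personal", "contacts"}, "deny"),
    ({"personal", "location"}, "deny"),
]

# This user's few labeled decisions (the 1-4 samples regime).
user_history = [({"personal", "contacts"}, "allow")]

label, conf = predict({"personal", "contacts"}, population, user_history)
# The single user sample outweighs the population prior here.
```

A confidence threshold on `conf` would carve out the high-confidence subset on which the paper reports its 94.4% figure; the relative weighting of user versus population samples is the knob that governs how quickly 1-4 labels shift predictions.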