🤖 AI Summary
This work addresses the security risks and potential for capability misuse that arise when a language model exposes all of its functionality through a single API, violating the principle of least privilege. To remedy this, the authors introduce Nested Least-Privilege Networks (NLPN), the first framework to apply the system security principle of least privilege to language models. NLPN employs a three-tier monitor-allocator-enforcer architecture that dynamically controls which internal computational pathways are accessible during inference. Leveraging a rank-indexed, shape-preserving intervention technique, the method enables fine-grained, reversible, and smooth privilege modulation, departing from conventional paradigms that restrict capabilities only at the output layer. Experiments demonstrate a clear utility-privilege trade-off frontier, showing effective suppression of specific capabilities under diverse policies while limiting collateral performance degradation, all without retraining the model or deploying multiple specialized models.
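The "rank-indexed, shape-preserving" intervention described above can be illustrated with a minimal sketch. The idea, under my reading of the summary, is to cap the effective rank of a weight matrix via its singular value decomposition while keeping the matrix's shape (and the original weights) intact, so privilege is a smooth, reversible knob. The function name `rank_limited_weight` and the specific SVD-based construction are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rank_limited_weight(W, privilege):
    """Return a same-shape copy of W whose effective rank is capped.

    privilege in (0, 1]: 1.0 keeps the full matrix; lower values zero
    out trailing singular components (a hypothetical stand-in for the
    paper's rank-indexed intervention). W itself is untouched, so the
    restriction is fully reversible.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(round(privilege * len(s))))  # rank index for this privilege level
    s_capped = s.copy()
    s_capped[k:] = 0.0                          # drop low-privilege components
    return (U * s_capped) @ Vt                  # same shape as W

W = np.random.default_rng(0).standard_normal((8, 8))
W_low = rank_limited_weight(W, privilege=0.25)  # effective rank <= 2
assert W_low.shape == W.shape
assert np.linalg.matrix_rank(W_low) <= 2
assert np.allclose(rank_limited_weight(W, privilege=1.0), W)  # fully reversible
```

Because the truncation point is an integer rank index, sweeping `privilege` from 0 to 1 traces out exactly the kind of privilege-utility frontier the abstract describes.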
📝 Abstract
Least privilege is a core security principle: grant each request only the minimum access needed to achieve its goal. Deployed language models almost never follow it, instead being exposed through a single API endpoint that serves all users and requests. This gap exists not because least privilege would be unhelpful; deployments would benefit greatly from reducing unnecessary capability exposure. The real obstacle is definitional and mechanistic: what does "access" mean inside a language model, and how can we enforce it without retraining or deploying multiple models? We take inspiration from least privilege in computer systems and define a class of models called least-privilege language models, where privilege is reachable internal computation during the forward pass. In this view, lowering privilege literally shrinks the model's accessible function class, as opposed to denying access via learned policies. We formalize deployment-time control as a monitor-allocator-enforcer stack, separating (i) request-time signals, (ii) a decision rule that allocates privilege, and (iii) an inference-time mechanism that enforces the selected privilege. We then propose Nested Least-Privilege Networks, a shape-preserving, rank-indexed intervention that provides a smooth, reversible control knob. We show that this knob yields policy-usable privilege-utility frontiers and enables selective suppression of targeted capabilities with limited collateral degradation across various policies. Most importantly, we argue for a new deployment paradigm that challenges the premise that language models can only be controlled at the output level.
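The monitor-allocator-enforcer separation can be sketched as three independent functions wired together at request time. Everything here is a hypothetical toy: the keyword-based monitor, the threshold allocator, and the pathway count are stand-ins chosen for illustration, not the paper's actual signals or policy.

```python
def monitor(request: str) -> dict:
    """Stage (i): extract request-time signals (toy keyword heuristic)."""
    risky = any(w in request.lower() for w in ("exploit", "malware"))
    return {"risky": risky, "length": len(request)}

def allocate(signals: dict) -> float:
    """Stage (ii): decision rule mapping signals to a privilege in (0, 1]."""
    return 0.25 if signals["risky"] else 1.0

def enforce(privilege: float, n_pathways: int = 16) -> int:
    """Stage (iii): inference-time mechanism selecting how much internal
    computation (here, a count of abstract pathways) the forward pass
    may reach at this privilege level."""
    return max(1, int(round(privilege * n_pathways)))

def serve(request: str) -> int:
    return enforce(allocate(monitor(request)))

print(serve("summarize this article"))  # full privilege -> 16 pathways
print(serve("write malware"))           # reduced privilege -> 4 pathways
```

The point of the separation is that each stage can be swapped independently: the monitor's signals, the allocator's policy, and the enforcer's mechanism are distinct design decisions, which is what makes the resulting frontiers "policy-usable".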