Safety Game: Balancing Safe and Informative Conversations with Blackbox Agentic AI using LP Solvers

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of balancing safety and informativeness in large language models (LLMs) without access to model internals or retraining. Methodologically, it formulates the safety–helpfulness trade-off as a two-player zero-sum game and computes the optimal balance as a minimax equilibrium; at inference time, responses are generated dynamically by coupling a linear programming solver with an LLM agent. Contributions include: (1) model-agnostic, scalable, real-time safety control; (2) flexible third-party deployment of customizable safety policies; and (3) empirical validation that efficient, lightweight alignment is feasible under black-box conditions. The framework significantly lowers deployment barriers, requiring no model modification, fine-tuning, or architectural changes, making it particularly suitable for resource-constrained organizations. Experimental results demonstrate robust safety enforcement while preserving task utility across diverse benchmarks, supporting its practicality for production-grade, policy-driven LLM governance.

📝 Abstract
Ensuring that large language models (LLMs) comply with safety requirements is a central challenge in AI deployment. Existing alignment approaches primarily operate during training, such as through fine-tuning or reinforcement learning from human feedback, but these methods are costly and inflexible, requiring retraining whenever new requirements arise. Recent efforts toward inference-time alignment mitigate some of these limitations but still assume access to model internals, which is often impractical and unsuitable for third-party stakeholders who lack such access. In this work, we propose a model-independent, black-box framework for safety alignment that does not require retraining or access to the underlying LLM architecture. As a proof of concept, we address the problem of trading off between generating safe but uninformative answers versus helpful yet potentially risky ones. We formulate this dilemma as a two-player zero-sum game whose minimax equilibrium captures the optimal balance between safety and helpfulness. LLM agents operationalize this framework by leveraging a linear programming solver at inference time to compute equilibrium strategies. Our results demonstrate the feasibility of black-box safety alignment, offering a scalable and accessible pathway for stakeholders, including smaller organizations and entities in resource-constrained settings, to enforce safety across rapidly evolving LLM ecosystems.
Problem

Research questions and friction points this paper is trying to address.

Balancing safe but uninformative versus helpful yet risky AI responses
Enforcing safety without model retraining or internal architecture access
Enabling third-party safety alignment for black-box AI systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box safety alignment without model retraining
Game theory formulation for safety-helpfulness tradeoff
Linear programming solver for equilibrium computation
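The last two points can be sketched concretely. Solving a two-player zero-sum game for its minimax equilibrium is a classic linear program: the row player maximizes a guaranteed value v subject to every column response yielding at least v. The 2×2 payoff matrix below (safe-refusal vs. helpful-answer rows against benign vs. risky queries) is purely illustrative, not taken from the paper, and the LP encoding uses generic SciPy, not the authors' system:

```python
# Hedged sketch: minimax equilibrium of a toy zero-sum
# "safety vs. helpfulness" game via linear programming.
import numpy as np
from scipy.optimize import linprog

# Rows: responder strategies (safe refusal, helpful answer).
# Columns: query types (benign, risky). Entries are the
# responder's payoff; all numbers here are hypothetical.
A = np.array([[0.3, 0.9],
              [1.0, 0.1]])

m, n = A.shape
# Decision variables: mixed strategy x (length m) and game value v.
# Maximize v  <=>  minimize -v.
c = np.concatenate([np.zeros(m), [-1.0]])
# For each column j: v - sum_i A[i, j] * x_i <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
# Strategy probabilities must sum to one.
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("mixed strategy:", x, "game value:", v)
```

For this matrix the equilibrium mixes the two response styles (about 60% safe refusal, 40% helpful answer, value ≈ 0.58), illustrating how an LP solver can pick a policy-driven balance at inference time without touching the model itself.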