Conformal Policy Control

📅 2026-03-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In high-stakes environments, an agent that violates safety constraints may be taken offline, ending exploration, while excessive conservatism hinders performance gains. This work proposes a conformal calibration method, grounded in a safe reference policy, that dynamically modulates the aggressiveness of a new policy according to a user-specified risk tolerance. Without assuming a correctly specified model class or tuned hyperparameters, the approach provides, for the first time, finite-sample safety guarantees for non-monotonic bounded constraint functions. By overcoming the limitations of traditional conservative optimization and existing conformal methods, it enables "safe-on-deployment" exploration while simultaneously improving policy performance, as demonstrated on tasks ranging from natural language question answering to biomolecular engineering.

๐Ÿ“ Abstract
An agent must try new behaviors to explore and improve. In high-stakes environments, an agent that violates safety constraints may cause harm and must be taken offline, curtailing any future interaction. Imitating old behavior is safe, but excessive conservatism discourages exploration. How much behavior change is too much? We show how to use any safe reference policy as a probabilistic regulator for any optimized but untested policy. Conformal calibration on data from the safe policy determines how aggressively the new policy can act, while provably enforcing the user's declared risk tolerance. Unlike conservative optimization methods, we do not assume the user has identified the correct model class nor tuned any hyperparameters. Unlike previous conformal methods, our theory provides finite-sample guarantees even for non-monotonic bounded constraint functions. Our experiments on applications ranging from natural language question answering to biomolecular engineering show that safe exploration is not only possible from the first moment of deployment, but can also improve performance.
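The core mechanism described in the abstract, calibrating on data from a safe reference policy to decide how aggressively a new policy may act, can be illustrated with a generic split-conformal sketch. This is not the paper's exact method (the paper handles non-monotonic bounded constraint functions; the code below is the textbook monotone case), and the function names and scores are hypothetical, but it shows the basic recipe: compute a quantile threshold on constraint scores observed under the safe policy at risk level alpha, then gate the optimized policy's behaviors against that threshold.

```python
import math
import random

def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold: the ceil((n+1)*(1-alpha))-th smallest
    calibration score. Under exchangeability, a fresh score from the same
    distribution exceeds this threshold with probability at most alpha."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:
        return float("inf")  # too few calibration samples for this alpha
    return sorted(cal_scores)[k - 1]

def allow(new_score, tau):
    """Gate a proposed behavior from the optimized policy: permit it only
    when its constraint score stays within the calibrated safe region."""
    return new_score <= tau

# Hypothetical example: constraint scores logged under the safe reference
# policy (lower = safer), calibrated at a 10% declared risk tolerance.
random.seed(0)
cal_scores = [random.random() for _ in range(200)]
tau = conformal_threshold(cal_scores, alpha=0.1)
```

Behaviors whose constraint score falls at or below `tau` are admitted; the declared risk tolerance bounds the chance that an admitted behavior's score is atypically large relative to the safe policy's data, which is the finite-sample, distribution-free flavor of guarantee the abstract refers to.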
Problem

Research questions and friction points this paper is trying to address.

safe exploration
conformal control
policy safety
risk tolerance
behavior change
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal Policy Control
safe exploration
conformal calibration
risk tolerance
finite-sample guarantee