🤖 AI Summary
This work addresses the challenge of ensuring real-time flight safety for highly dynamic autonomous systems -- such as fighter jets -- by preventing violations of critical operational boundaries, including g-force limits, altitude constraints, and geofences. To this end, the authors propose Guardrails, a runtime safety framework grounded in Control Barrier Function (CBF) theory, which blends pilot or AI-generated commands with safety-preserving control actions in a closed-loop system. The architecture guarantees safety with minimal intervention, preserving operator authority whenever possible. Notably, the authors demonstrate CBF-based safety enforcement on a real F-16 fighter aircraft. Flight tests show that Guardrails enforces multiple complex safety constraints simultaneously while maintaining high fidelity to pilot intent.
📝 Abstract
The advancement of autonomous systems -- from legged robots to self-driving vehicles and aircraft -- necessitates executing increasingly high-performance and dynamic motions without ever putting the system or its environment in harm's way. In this paper, we introduce Guardrails -- a novel runtime assurance mechanism that guarantees dynamic safety for autonomous systems, allowing them to operate safely at the edge of their operational domains. Rooted in the theory of control barrier functions, Guardrails offers a control strategy that carefully blends commands from a human or AI operator with safe control actions to guarantee safe behavior. To demonstrate its capabilities, we implemented Guardrails on an F-16 fighter jet and conducted flight tests where Guardrails supervised a human pilot to enforce g-limits, altitude bounds, geofence constraints, and combinations thereof. Throughout extensive flight testing, Guardrails successfully ensured safety, keeping the pilot in control when safe to do so and minimally modifying unsafe pilot inputs otherwise.
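To make the "minimally modifying unsafe pilot inputs" idea concrete, here is a minimal sketch of a CBF-style safety filter on a toy 1-D single-integrator model. This is an illustrative simplification, not the controller from the paper: the state `x`, the bound `x_max`, and the gain `alpha` are assumed names, and the real F-16 implementation handles far richer dynamics and multiple constraints.

```python
def cbf_filter(u_pilot: float, x: float, x_max: float, alpha: float = 1.0) -> float:
    """Toy CBF safety filter for the single integrator x' = u.

    The barrier h(x) = x_max - x encodes the constraint x <= x_max.
    The CBF condition h' + alpha * h >= 0 becomes -u + alpha * (x_max - x) >= 0,
    i.e. u <= alpha * (x_max - x). The minimally invasive filter is therefore
    a clamp: pass the pilot command through when it satisfies the bound,
    and saturate it to the largest safe value otherwise.
    """
    u_safe_bound = alpha * (x_max - x)
    return min(u_pilot, u_safe_bound)

# Far from the limit: the pilot command is safe and passes through unchanged.
print(cbf_filter(u_pilot=0.5, x=0.0, x_max=10.0))   # 0.5

# Near the limit: the command would breach x_max, so it is clamped.
print(cbf_filter(u_pilot=5.0, x=9.5, x_max=10.0))   # 0.5
```

In higher dimensions with control-affine dynamics, the same condition is typically enforced by a small quadratic program that finds the safe input closest to the operator's command, which is how the "blending" described above generalizes.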