Cognitive Guardrails for Open-World Decision Making in Autonomous Drone Swarms

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small UAV swarms deployed in open-world missions, such as post-disaster search and rescue, are limited by conventional vision systems' weak recognition of unfamiliar objects and lack of semantic reasoning; directly integrating large language models (LLMs) risks hallucination, compromising decision safety. To address this, we propose a "Cognitive Guardrail" mechanism: the first framework to tightly integrate LLM-based semantic reasoning with formal safety constraints, task-relevance verification, and uncertainty quantification, yielding a verifiable, controllable high-level autonomous decision-making architecture. Our approach combines computer vision, LLMs, multi-agent coordination, and probabilistic uncertainty modeling. Evaluated in both simulation and real-world deployments, it reduces the mission failure rate by 76% and achieves over 92% accuracy in critical-target response, significantly enhancing the trustworthiness of semantic understanding and adaptive decision-making for UAV swarms in dynamic, open environments.
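The summary describes three gates that an LLM-proposed action must pass before a drone acts on it: formal safety constraints, task-relevance verification, and an uncertainty threshold. A minimal sketch of that gating logic is below; all names, thresholds, and the denylist are illustrative assumptions, not the paper's actual interface or parameter values.

```python
from dataclasses import dataclass

# Illustrative sketch only: Detection, guardrail(), the thresholds, and the
# denylist are assumed names/values, not the paper's actual API.

@dataclass
class Detection:
    label: str            # object class proposed by the vision/LLM pipeline
    relevance: float      # LLM-judged relevance to the mission (0..1)
    confidence: float     # calibrated probability from the uncertainty model

SAFETY_DENYLIST = {"enter_no_fly_zone", "descend_below_min_altitude"}
RELEVANCE_MIN = 0.6       # task-relevance verification threshold (assumed)
CONFIDENCE_MIN = 0.8      # uncertainty-quantification threshold (assumed)

def guardrail(detection: Detection, proposed_action: str) -> str:
    """Gate an LLM-proposed action through the three guardrail checks."""
    if proposed_action in SAFETY_DENYLIST:
        return "reject:safety"        # formal safety constraint violated
    if detection.relevance < RELEVANCE_MIN:
        return "reject:irrelevant"    # fails task-relevance verification
    if detection.confidence < CONFIDENCE_MIN:
        return "defer:human"          # too uncertain: escalate, do not act
    return "accept"                   # safe, relevant, confident: execute

# Example: a confident, relevant detection with a safe action passes.
d = Detection(label="life_jacket", relevance=0.9, confidence=0.93)
print(guardrail(d, "mark_waypoint"))  # -> accept
```

The key design point the paper argues for is that the LLM never commands the swarm directly: its suggestions are filtered, and low-confidence cases are deferred rather than executed.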

📝 Abstract
Small Uncrewed Aerial Systems (sUAS) are increasingly deployed as autonomous swarms in search-and-rescue and other disaster-response scenarios. In these settings, they use computer vision (CV) to detect objects of interest and autonomously adapt their missions. However, traditional CV systems often struggle to recognize unfamiliar objects in open-world environments or to infer their relevance for mission planning. To address this, we incorporate large language models (LLMs) to reason about detected objects and their implications. While LLMs can offer valuable insights, they are also prone to hallucinations and may produce incorrect, misleading, or unsafe recommendations. To ensure safe and sensible decision-making under uncertainty, high-level decisions must be governed by cognitive guardrails. This article presents the design, simulation, and real-world integration of these guardrails for sUAS swarms in search-and-rescue missions.
Problem

Research questions and friction points this paper is trying to address.

Autonomous drone swarms struggle to recognize unfamiliar objects in open-world environments
LLMs offer useful insights but risk unsafe or misleading recommendations
How can cognitive guardrails ensure safe open-world decision-making?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs for object reasoning in drone swarms
Implements cognitive guardrails for safe decisions
Combines CV and LLMs for open-world adaptation
Jane Cleland-Huang
University of Notre Dame
Software Traceability · Requirements Engineering · Safety Assurance · Cyber-Physical Systems · UAV
Pedro Alarcon Granadeno
Computer Science and Engineering, University of Notre Dame, USA
Arturo Miguel Russell Bernal
Computer Science and Engineering, University of Notre Dame, USA
Demetrius Hernandez
Computer Science and Engineering, University of Notre Dame, USA
Michael Murphy
Computer Science and Engineering, University of Notre Dame, USA
Maureen Petterson
Computer Science and Engineering, University of Notre Dame, USA
Walter J. Scheirer
Computer Science and Engineering, University of Notre Dame, USA